Apr 22 15:05:48.065069 ip-10-0-134-217 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 22 15:05:48.065078 ip-10-0-134-217 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 22 15:05:48.065085 ip-10-0-134-217 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 22 15:05:48.065333 ip-10-0-134-217 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 22 15:05:58.132276 ip-10-0-134-217 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 22 15:05:58.132292 ip-10-0-134-217 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot c5e612c7695145d2bb45ce9ee0a889e0 --
Apr 22 15:08:20.651210 ip-10-0-134-217 systemd[1]: Starting Kubernetes Kubelet...
Apr 22 15:08:21.075307 ip-10-0-134-217 kubenswrapper[2575]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 15:08:21.075307 ip-10-0-134-217 kubenswrapper[2575]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 22 15:08:21.075307 ip-10-0-134-217 kubenswrapper[2575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 15:08:21.075307 ip-10-0-134-217 kubenswrapper[2575]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 22 15:08:21.075307 ip-10-0-134-217 kubenswrapper[2575]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 15:08:21.076903 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.076782 2575 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 22 15:08:21.081466 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081445 2575 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 22 15:08:21.081466 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081464 2575 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 22 15:08:21.081466 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081469 2575 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 22 15:08:21.081466 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081474 2575 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081479 2575 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081483 2575 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081487 2575 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081491 2575 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081494 2575 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081499 2575 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081503 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081507 2575 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081510 2575 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081514 2575 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081518 2575 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081521 2575 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081525 2575 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081529 2575 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081532 2575 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081536 2575 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081541 2575 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081545 2575 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081548 2575 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 22 15:08:21.081721 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081553 2575 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081557 2575 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081560 2575 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081564 2575 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081568 2575 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081572 2575 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081577 2575 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081580 2575 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081585 2575 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081589 2575 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081592 2575 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081596 2575 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081600 2575 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081604 2575 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081609 2575 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081613 2575 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081618 2575 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081625 2575 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081632 2575 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 22 15:08:21.082536 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081637 2575 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081641 2575 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081647 2575 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081653 2575 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081657 2575 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081661 2575 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081666 2575 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081670 2575 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081676 2575 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081682 2575 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081687 2575 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081691 2575 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081695 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081700 2575 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081704 2575 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081708 2575 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081712 2575 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081716 2575 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081720 2575 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081724 2575 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 22 15:08:21.083277 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081728 2575 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081732 2575 feature_gate.go:328] unrecognized feature gate: Example
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081739 2575 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081743 2575 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081748 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081753 2575 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081757 2575 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081762 2575 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081766 2575 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081770 2575 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081784 2575 feature_gate.go:328] unrecognized feature gate: Example2
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081789 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081793 2575 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081798 2575 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081802 2575 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081806 2575 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081811 2575 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081815 2575 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081819 2575 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081823 2575 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 22 15:08:21.083768 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081828 2575 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 22 15:08:21.084458 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081832 2575 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 22 15:08:21.084458 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081844 2575 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 22 15:08:21.084458 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.081849 2575 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083811 2575 flags.go:64] FLAG: --address="0.0.0.0"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083832 2575 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083847 2575 flags.go:64] FLAG: --anonymous-auth="true"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083854 2575 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083881 2575 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083886 2575 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083893 2575 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083900 2575 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083905 2575 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 22 15:08:21.086438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083910 2575 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083916 2575 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083921 2575 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083926 2575 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083931 2575 flags.go:64] FLAG: --cgroup-root=""
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083936 2575 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083940 2575 flags.go:64] FLAG: --client-ca-file=""
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083944 2575 flags.go:64] FLAG: --cloud-config=""
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083949 2575 flags.go:64] FLAG: --cloud-provider="external"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083954 2575 flags.go:64] FLAG: --cluster-dns="[]"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083965 2575 flags.go:64] FLAG: --cluster-domain=""
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083969 2575 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083975 2575 flags.go:64] FLAG: --config-dir=""
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083980 2575 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083986 2575 flags.go:64] FLAG: --container-log-max-files="5"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083993 2575 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.083999 2575 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084004 2575 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084009 2575 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084014 2575 flags.go:64] FLAG: --contention-profiling="false"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084019 2575 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084025 2575 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084038 2575 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084043 2575 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084050 2575 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 22 15:08:21.086982 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084055 2575 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084059 2575 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084064 2575 flags.go:64] FLAG: --enable-load-reader="false"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084070 2575 flags.go:64] FLAG: --enable-server="true"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084074 2575 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084085 2575 flags.go:64] FLAG: --event-burst="100"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084090 2575 flags.go:64] FLAG: --event-qps="50"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084095 2575 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084101 2575 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084106 2575 flags.go:64] FLAG: --eviction-hard=""
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084112 2575 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084117 2575 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084122 2575 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084127 2575 flags.go:64] FLAG: --eviction-soft=""
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084131 2575 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084136 2575 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084140 2575 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084145 2575 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084150 2575 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084155 2575 flags.go:64] FLAG: --fail-swap-on="true"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084159 2575 flags.go:64] FLAG: --feature-gates=""
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084166 2575 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084171 2575 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084176 2575 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084181 2575 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084186 2575 flags.go:64] FLAG: --healthz-port="10248"
Apr 22 15:08:21.087597 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084191 2575 flags.go:64] FLAG: --help="false"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084196 2575 flags.go:64] FLAG: --hostname-override="ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084201 2575 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084206 2575 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084243 2575 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084270 2575 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084276 2575 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084281 2575 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084285 2575 flags.go:64] FLAG: --image-service-endpoint=""
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084290 2575 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084293 2575 flags.go:64] FLAG: --kube-api-burst="100"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084297 2575 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084301 2575 flags.go:64] FLAG: --kube-api-qps="50"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084304 2575 flags.go:64] FLAG: --kube-reserved=""
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084308 2575 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084312 2575 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084315 2575 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084318 2575 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084321 2575 flags.go:64] FLAG: --lock-file=""
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084325 2575 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084329 2575 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084332 2575 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084339 2575 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 22 15:08:21.088273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084342 2575 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084345 2575 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084348 2575 flags.go:64] FLAG: --logging-format="text"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084352 2575 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084356 2575 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084359 2575 flags.go:64] FLAG: --manifest-url=""
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084362 2575 flags.go:64] FLAG: --manifest-url-header=""
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084367 2575 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084371 2575 flags.go:64] FLAG: --max-open-files="1000000"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084376 2575 flags.go:64] FLAG: --max-pods="110"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084379 2575 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084382 2575 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084385 2575 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084389 2575 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084393 2575 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084397 2575 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084400 2575 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084412 2575 flags.go:64] FLAG: --node-status-max-images="50"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084415 2575 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084418 2575 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084422 2575 flags.go:64] FLAG: --pod-cidr=""
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084424 2575 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084430 2575 flags.go:64] FLAG: --pod-manifest-path=""
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084433 2575 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 22 15:08:21.088859 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084436 2575 flags.go:64] FLAG: --pods-per-core="0"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084439 2575 flags.go:64] FLAG: --port="10250"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084442 2575 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084445 2575 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-09a4b3901e30b8a79"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084448 2575 flags.go:64] FLAG: --qos-reserved=""
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084451 2575 flags.go:64] FLAG: --read-only-port="10255"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084455 2575 flags.go:64] FLAG: --register-node="true"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084459 2575 flags.go:64] FLAG: --register-schedulable="true"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084462 2575 flags.go:64] FLAG: --register-with-taints=""
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084466 2575 flags.go:64] FLAG: --registry-burst="10"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084469 2575 flags.go:64] FLAG: --registry-qps="5"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084472 2575 flags.go:64] FLAG: --reserved-cpus=""
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084476 2575 flags.go:64] FLAG: --reserved-memory=""
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084480 2575 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084483 2575 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084486 2575 flags.go:64] FLAG: --rotate-certificates="false"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084489 2575 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084492 2575 flags.go:64] FLAG: --runonce="false"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084495 2575 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084498 2575 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084501 2575 flags.go:64] FLAG: --seccomp-default="false"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084504 2575 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084508 2575 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084512 2575 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084517 2575 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084520 2575 flags.go:64] FLAG: --storage-driver-password="root"
Apr 22 15:08:21.089466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084523 2575 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084526 2575 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084529 2575 flags.go:64] FLAG: --storage-driver-user="root"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084532 2575 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084535 2575 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084538 2575 flags.go:64] FLAG: --system-cgroups=""
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084541 2575 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084547 2575 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084550 2575 flags.go:64] FLAG: --tls-cert-file=""
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084553 2575 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084559 2575 flags.go:64] FLAG: --tls-min-version=""
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084563 2575 flags.go:64] FLAG: --tls-private-key-file=""
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084566 2575 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084569 2575 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084572 2575 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084575 2575 flags.go:64] FLAG: --v="2"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084580 2575 flags.go:64] FLAG: --version="false"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084593 2575 flags.go:64] FLAG: --vmodule=""
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084597 2575 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 22 15:08:21.090110 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084601 2575 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 22 15:08:21.092206 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084940 2575 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 22 15:08:21.092206 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084944 2575 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 22 15:08:21.092206 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084947 2575 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 22 15:08:21.092206 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084950 2575 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 22 15:08:21.092206 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084952 2575 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 22 15:08:21.092206 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084955 2575 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 22 15:08:21.092206 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084957 2575 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084960 2575 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084962 2575 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.084965 2575 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.084971 2575 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.091848 2575 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.091987 2575 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092043 2575 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092049 2575 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092053 2575 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092056 2575 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092059 2575 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092062 2575 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092064 2575 feature_gate.go:328] unrecognized feature gate: 
MachineAPIMigration Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092067 2575 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092070 2575 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 22 15:08:21.092690 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092072 2575 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092075 2575 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092078 2575 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092080 2575 feature_gate.go:328] unrecognized feature gate: Example Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092083 2575 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092085 2575 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092088 2575 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092092 2575 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092095 2575 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092098 2575 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092100 2575 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092104 2575 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092106 2575 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092109 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092111 2575 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092114 2575 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092117 2575 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092119 2575 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092122 2575 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092124 2575 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 22 15:08:21.093116 ip-10-0-134-217 kubenswrapper[2575]: 
W0422 15:08:21.092127 2575 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092129 2575 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092132 2575 feature_gate.go:328] unrecognized feature gate: Example2 Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092135 2575 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092137 2575 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092139 2575 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092142 2575 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092145 2575 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092147 2575 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092150 2575 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092152 2575 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092155 2575 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092157 2575 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092159 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092162 2575 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092165 2575 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092167 2575 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092170 2575 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092173 2575 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092176 2575 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 22 15:08:21.093630 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092179 2575 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092182 2575 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092184 2575 feature_gate.go:328] unrecognized feature gate: 
ImageModeStatusReporting Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092187 2575 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092190 2575 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092193 2575 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092197 2575 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092203 2575 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092207 2575 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092211 2575 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092214 2575 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092217 2575 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092220 2575 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092223 2575 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092226 2575 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092228 2575 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092231 2575 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092234 2575 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092236 2575 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 22 15:08:21.094180 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092239 2575 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092241 2575 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092244 2575 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092246 2575 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092249 2575 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092252 2575 
feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092254 2575 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092257 2575 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092259 2575 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092262 2575 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092264 2575 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092267 2575 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092270 2575 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092273 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092275 2575 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092278 2575 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092280 2575 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 22 15:08:21.094649 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092283 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.092289 2575 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092387 2575 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092392 2575 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092395 2575 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092398 2575 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092401 2575 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092403 2575 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 22 15:08:21.095099 
ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092406 2575 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092408 2575 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092411 2575 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092414 2575 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092417 2575 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092421 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092424 2575 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 22 15:08:21.095099 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092427 2575 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092429 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092432 2575 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092434 2575 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092437 2575 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092439 2575 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092442 2575 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092445 2575 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092447 2575 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092450 2575 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092452 2575 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092455 2575 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092457 2575 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092460 2575 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092462 2575 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 22 15:08:21.095568 ip-10-0-134-217 
kubenswrapper[2575]: W0422 15:08:21.092465 2575 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092467 2575 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092470 2575 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092473 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092476 2575 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 22 15:08:21.095568 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092478 2575 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092480 2575 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092483 2575 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092486 2575 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092488 2575 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092491 2575 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092493 2575 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092495 2575 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092498 2575 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092500 2575 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092503 2575 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092505 2575 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092508 2575 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092510 2575 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092513 2575 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092515 2575 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092518 2575 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092520 
2575 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092523 2575 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092525 2575 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 22 15:08:21.096150 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092528 2575 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092530 2575 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092533 2575 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092535 2575 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092538 2575 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092541 2575 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092544 2575 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092546 2575 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092548 2575 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092551 2575 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092553 2575 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092556 2575 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092560 2575 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092563 2575 feature_gate.go:328] unrecognized feature gate: Example Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092566 2575 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092568 2575 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092571 2575 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092574 2575 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092576 2575 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092579 2575 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 22 15:08:21.096647 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092582 2575 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092585 2575 feature_gate.go:328] unrecognized feature gate: Example2 Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092587 2575 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092590 2575 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092593 2575 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092596 2575 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092598 2575 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092600 2575 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092603 2575 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092605 2575 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092608 2575 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092610 2575 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:21.092613 2575 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.092618 2575 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false 
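Note on the block above: the "unrecognized feature gate" warnings are expected noise on OpenShift. The rendered kubelet config carries the cluster-level gates consumed by operators, and the kubelet only understands its own subset, so everything else is logged as unrecognized; the feature_gate.go:384 map is what the kubelet actually applied. A minimal triage sketch, assuming shell access to the node and the kubelet unit shown here (hypothetical commands, not from this log):

    # Effective kubelet gate map for the current boot
    journalctl -b -u kubelet | grep 'feature_gate.go:384'
    # How noisy the benign warnings are
    journalctl -b -u kubelet | grep -c 'unrecognized feature gate'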
Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.093348 2575 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 22 15:08:21.097176 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.095605 2575 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 22 15:08:21.097599 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.096657 2575 server.go:1019] "Starting client certificate rotation"
Apr 22 15:08:21.097599 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.096756 2575 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 22 15:08:21.097599 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.096791 2575 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 22 15:08:21.122569 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.122542 2575 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 22 15:08:21.141701 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.141656 2575 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 22 15:08:21.158678 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.158643 2575 log.go:25] "Validated CRI v1 runtime API"
Apr 22 15:08:21.159381 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.159347 2575 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 22 15:08:21.164929 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.164910 2575 log.go:25] "Validated CRI v1 image API"
Apr 22 15:08:21.166379 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.166355 2575 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 22 15:08:21.170310 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.170288 2575 fs.go:135] Filesystem UUIDs: map[6da88f69-066e-4280-8cef-528669514671:/dev/nvme0n1p4 7B77-95E7:/dev/nvme0n1p2 98424978-f2d7-43c0-81e1-bbaee939d2a4:/dev/nvme0n1p3]
Apr 22 15:08:21.170382 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.170309 2575 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 22 15:08:21.176620 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.176487 2575 manager.go:217] Machine: {Timestamp:2026-04-22 15:08:21.174322032 +0000 UTC m=+0.409569854 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3070204 MemoryCapacity:32812167168 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2add1938eaa68ac305aaf3beb74fa0 SystemUUID:ec2add19-38ea-a68a-c305-aaf3beb74fa0 BootID:c5e612c7-6951-45d2-bb45-ce9ee0a889e0 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16406081536 Type:vfs Inodes:4005391 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6562435072 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16406085632 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:68:8e:e5:1e:0b Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:68:8e:e5:1e:0b Speed:0 Mtu:9001} {Name:ovs-system MacAddress:4e:4b:c8:2e:f9:39 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:32812167168 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:34603008 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 22 15:08:21.176620 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.176612 2575 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
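The runtime handshake above (CRI v1 runtime and image APIs validated, cgroupDriver="systemd" adopted from the runtime) can be cross-checked from the node. A sketch under the assumption that crictl and crio are installed with the stock RHCOS socket path, as is usual on OpenShift nodes (hypothetical commands, not from this log):

    # Confirm the CRI endpoint the kubelet validated
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # Confirm the cgroup manager CRI-O advertises; expect cgroup_manager = "systemd"
    crio config 2>/dev/null | grep -i cgroup_manager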
Apr 22 15:08:21.176775 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.176763 2575 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 22 15:08:21.178118 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.178088 2575 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 22 15:08:21.178271 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.178121 2575 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-134-217.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 22 15:08:21.178327 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.178282 2575 topology_manager.go:138] "Creating topology manager with none policy"
Apr 22 15:08:21.178327 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.178291 2575 container_manager_linux.go:306] "Creating device plugin manager"
Apr 22 15:08:21.178327 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.178305 2575 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 22 15:08:21.179000 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.178988 2575 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 22 15:08:21.180416 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.180403 2575 state_mem.go:36] "Initialized new in-memory state store"
Apr 22 15:08:21.180535 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.180526 2575 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 22 15:08:21.182676 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.182664 2575 kubelet.go:491] "Attempting to sync node with API server"
Apr 22 15:08:21.182717 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.182681 2575 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 22 15:08:21.182759 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.182717 2575 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 22 15:08:21.182759 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.182728 2575 kubelet.go:397] "Adding apiserver pod source"
Apr 22 15:08:21.182759 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.182739 2575 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 22 15:08:21.184005 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.183988 2575 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 22 15:08:21.184108 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.184019 2575 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 22 15:08:21.187109 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.187093 2575 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1"
Apr 22 15:08:21.188753 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.188738 2575 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 22 15:08:21.191068 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191041 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Apr 22 15:08:21.191146 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191080 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Apr 22 15:08:21.191146 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191097 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Apr 22 15:08:21.191146 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191112 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Apr 22 15:08:21.191146 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191122 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Apr 22 15:08:21.191146 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191131 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Apr 22 15:08:21.191146 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191139 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Apr 22 15:08:21.191304 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191155 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Apr 22 15:08:21.191304 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191167 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Apr 22 15:08:21.191304 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191176 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Apr 22 15:08:21.191304 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191189 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Apr 22 15:08:21.191304 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191217 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Apr 22 15:08:21.191304 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191244 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Apr 22 15:08:21.191304 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.191253 2575 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Apr 22 15:08:21.196153 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.196136 2575 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 22 15:08:21.196244 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.196185 2575 server.go:1295] "Started kubelet"
Apr 22 15:08:21.196333 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.196298 2575 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-134-217.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 22 15:08:21.196398 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.196341 2575 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 22 15:08:21.196451 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.196400 2575 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 22 15:08:21.196451 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.196392 2575 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 22 15:08:21.196538 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.196466 2575 server_v1.go:47] "podresources" method="list" useActivePods=true
Apr 22 15:08:21.196769 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.196410 2575 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-134-217.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 22 15:08:21.197217 ip-10-0-134-217 systemd[1]: Started Kubernetes Kubelet.
Apr 22 15:08:21.198076 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.197943 2575 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 22 15:08:21.198758 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.198746 2575 server.go:317] "Adding debug handlers to kubelet server" Apr 22 15:08:21.203781 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.203763 2575 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 22 15:08:21.204171 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.203208 2575 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-134-217.ec2.internal.18a8b64e7baabf16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-134-217.ec2.internal,UID:ip-10-0-134-217.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-134-217.ec2.internal,},FirstTimestamp:2026-04-22 15:08:21.196152598 +0000 UTC m=+0.431400401,LastTimestamp:2026-04-22 15:08:21.196152598 +0000 UTC m=+0.431400401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-134-217.ec2.internal,}" Apr 22 15:08:21.204452 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.204420 2575 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 22 15:08:21.205278 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.205258 2575 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 22 15:08:21.205363 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.205261 2575 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 22 15:08:21.205363 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.205301 2575 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 22 15:08:21.205473 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.205457 2575 reconstruct.go:97] "Volume reconstruction finished" Apr 22 15:08:21.205512 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.205475 2575 reconciler.go:26] "Reconciler: start to sync state" Apr 22 15:08:21.205512 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.205459 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found" Apr 22 15:08:21.206006 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.205987 2575 factory.go:55] Registering systemd factory Apr 22 15:08:21.206103 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.206052 2575 factory.go:223] Registration of the systemd container factory successfully Apr 22 15:08:21.206291 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.206276 2575 factory.go:153] Registering CRI-O factory Apr 22 15:08:21.206291 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.206292 2575 factory.go:223] Registration of the crio container factory successfully Apr 22 15:08:21.206391 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.206304 2575 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 22 15:08:21.206391 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.206346 2575 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 22 15:08:21.206391 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.206382 2575 factory.go:103] Registering Raw factory Apr 22 15:08:21.206520 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.206398 2575 manager.go:1196] Started watching for new ooms in manager Apr 22 15:08:21.206919 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.206907 2575 manager.go:319] Starting recovery of all containers Apr 22 15:08:21.213650 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.213612 2575 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 22 15:08:21.213773 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.213750 2575 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-134-217.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 22 15:08:21.216285 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.216258 2575 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-c82lg" Apr 22 15:08:21.217933 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.217893 2575 manager.go:324] Recovery completed Apr 22 15:08:21.220158 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.220119 2575 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/systemd-update-utmp-runlevel.service": inotify_add_watch /sys/fs/cgroup/system.slice/systemd-update-utmp-runlevel.service: no such file or directory Apr 22 15:08:21.223643 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.223629 2575 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 22 15:08:21.224521 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.224503 2575 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-c82lg" Apr 22 15:08:21.226209 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.226189 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientMemory" Apr 22 15:08:21.226295 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.226229 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasNoDiskPressure" Apr 22 15:08:21.226295 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.226244 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientPID" Apr 22 15:08:21.226846 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.226830 2575 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 22 15:08:21.226846 ip-10-0-134-217 kubenswrapper[2575]: 
Apr 22 15:08:21.226962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.226873 2575 state_mem.go:36] "Initialized new in-memory state store"
Apr 22 15:08:21.228313 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.228246 2575 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-134-217.ec2.internal.18a8b64e7d755cf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-134-217.ec2.internal,UID:ip-10-0-134-217.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-134-217.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-134-217.ec2.internal,},FirstTimestamp:2026-04-22 15:08:21.226208501 +0000 UTC m=+0.461456302,LastTimestamp:2026-04-22 15:08:21.226208501 +0000 UTC m=+0.461456302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-134-217.ec2.internal,}"
Apr 22 15:08:21.229574 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.229559 2575 policy_none.go:49] "None policy: Start"
Apr 22 15:08:21.229574 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.229576 2575 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 22 15:08:21.229658 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.229587 2575 state_mem.go:35] "Initializing new in-memory state store"
Apr 22 15:08:21.267649 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.267630 2575 manager.go:341] "Starting Device Plugin manager"
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.267717 2575 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.267748 2575 server.go:85] "Starting device plugin registration server"
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.268033 2575 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.268051 2575 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.268207 2575 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.268287 2575 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.268296 2575 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.269094 2575 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 22 15:08:21.272725 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.269127 2575 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:21.321910 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.321849 2575 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 22 15:08:21.323190 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.323173 2575 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 22 15:08:21.323281 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.323204 2575 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 22 15:08:21.323281 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.323247 2575 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 22 15:08:21.323281 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.323257 2575 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 22 15:08:21.323406 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.323363 2575 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 22 15:08:21.327231 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.327171 2575 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 22 15:08:21.368736 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.368703 2575 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 22 15:08:21.369658 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.369639 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientMemory"
Apr 22 15:08:21.369731 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.369670 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasNoDiskPressure"
Apr 22 15:08:21.369731 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.369681 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientPID"
Apr 22 15:08:21.369731 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.369706 2575 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.379513 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.379487 2575 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.379587 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.379516 2575 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-134-217.ec2.internal\": node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:21.412654 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.412622 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:21.423628 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.423600 2575 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"]
Apr 22 15:08:21.423700 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.423671 2575 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 22 15:08:21.424676 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.424659 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientMemory"
Apr 22 15:08:21.424748 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.424691 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasNoDiskPressure"
Apr 22 15:08:21.424748 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.424703 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientPID"
Apr 22 15:08:21.426044 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426032 2575 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 22 15:08:21.426200 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426185 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.426243 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426219 2575 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 22 15:08:21.426831 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426811 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientMemory"
Apr 22 15:08:21.426955 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426841 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasNoDiskPressure"
Apr 22 15:08:21.426955 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426851 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientPID"
Apr 22 15:08:21.426955 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426811 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientMemory"
Apr 22 15:08:21.426955 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426912 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasNoDiskPressure"
Apr 22 15:08:21.426955 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.426929 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientPID"
Apr 22 15:08:21.428311 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.428293 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.428380 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.428329 2575 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 22 15:08:21.429084 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.429068 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientMemory"
Apr 22 15:08:21.429166 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.429101 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasNoDiskPressure"
Apr 22 15:08:21.429166 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.429118 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeHasSufficientPID"
Apr 22 15:08:21.450701 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.450671 2575 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-134-217.ec2.internal\" not found" node="ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.455271 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.455254 2575 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-134-217.ec2.internal\" not found" node="ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.507093 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.507054 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/a1f9ca9b98f3c0aa38bb3225e3c68dd3-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal\" (UID: \"a1f9ca9b98f3c0aa38bb3225e3c68dd3\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.507093 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.507093 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1f9ca9b98f3c0aa38bb3225e3c68dd3-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal\" (UID: \"a1f9ca9b98f3c0aa38bb3225e3c68dd3\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.507301 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.507111 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/9b396163e7a8a1c1709913f4b2fb7b1e-config\") pod \"kube-apiserver-proxy-ip-10-0-134-217.ec2.internal\" (UID: \"9b396163e7a8a1c1709913f4b2fb7b1e\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.513128 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.513104 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:21.607642 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.607573 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/a1f9ca9b98f3c0aa38bb3225e3c68dd3-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal\" (UID: \"a1f9ca9b98f3c0aa38bb3225e3c68dd3\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.607642 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.607608 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1f9ca9b98f3c0aa38bb3225e3c68dd3-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal\" (UID: \"a1f9ca9b98f3c0aa38bb3225e3c68dd3\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.607642 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.607625 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/9b396163e7a8a1c1709913f4b2fb7b1e-config\") pod \"kube-apiserver-proxy-ip-10-0-134-217.ec2.internal\" (UID: \"9b396163e7a8a1c1709913f4b2fb7b1e\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.607798 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.607668 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1f9ca9b98f3c0aa38bb3225e3c68dd3-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal\" (UID: \"a1f9ca9b98f3c0aa38bb3225e3c68dd3\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.607798 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.607674 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/a1f9ca9b98f3c0aa38bb3225e3c68dd3-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal\" (UID: \"a1f9ca9b98f3c0aa38bb3225e3c68dd3\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.607798 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.607710 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/9b396163e7a8a1c1709913f4b2fb7b1e-config\") pod \"kube-apiserver-proxy-ip-10-0-134-217.ec2.internal\" (UID: \"9b396163e7a8a1c1709913f4b2fb7b1e\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.613641 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.613620 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:21.714574 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.714533 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:21.752733 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.752705 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.758458 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:21.758432 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:21.814860 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.814822 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:21.915497 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:21.915419 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:22.015928 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:22.015900 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:22.096127 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.096095 2575 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 22 15:08:22.096756 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.096261 2575 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 22 15:08:22.116701 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:22.116668 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:22.204992 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.204899 2575 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 22 15:08:22.217346 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:22.217313 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:22.221633 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.221600 2575 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 22 15:08:22.226756 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.226728 2575 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-21 15:03:21 +0000 UTC" deadline="2027-11-14 11:59:43.851582328 +0000 UTC"
Apr 22 15:08:22.226756 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.226756 2575 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="13700h51m21.624829053s"
Apr 22 15:08:22.247012 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.246983 2575 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 22 15:08:22.250592 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.250572 2575 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-kk7cx"
Apr 22 15:08:22.260210 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.260190 2575 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-kk7cx"
Apr 22 15:08:22.318237 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:22.318205 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:22.418620 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:22.418586 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:22.456680 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:22.456605 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b396163e7a8a1c1709913f4b2fb7b1e.slice/crio-56aed3d0c99779aa955ea8ffa0ab4891c5619d2e6ef0c02f46e2bbe3ee2985e3 WatchSource:0}: Error finding container 56aed3d0c99779aa955ea8ffa0ab4891c5619d2e6ef0c02f46e2bbe3ee2985e3: Status 404 returned error can't find the container with id 56aed3d0c99779aa955ea8ffa0ab4891c5619d2e6ef0c02f46e2bbe3ee2985e3
Apr 22 15:08:22.457109 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:22.457095 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1f9ca9b98f3c0aa38bb3225e3c68dd3.slice/crio-0ee38e06006135ea1e4a4d904af38266067b625f30a81a876a94fb4e807fbc98 WatchSource:0}: Error finding container 0ee38e06006135ea1e4a4d904af38266067b625f30a81a876a94fb4e807fbc98: Status 404 returned error can't find the container with id 0ee38e06006135ea1e4a4d904af38266067b625f30a81a876a94fb4e807fbc98
Apr 22 15:08:22.463371 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.463352 2575 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 22 15:08:22.519469 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:22.519431 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:22.588158 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.588128 2575 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 22 15:08:22.619950 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:22.619905 2575 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-217.ec2.internal\" not found"
Apr 22 15:08:22.712977 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.712918 2575 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 22 15:08:22.804986 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.804950 2575 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:22.815759 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.815735 2575 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 22 15:08:22.816719 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.816704 2575 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal"
Apr 22 15:08:22.828686 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:22.828662 2575 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 22 15:08:23.184183 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.184104 2575 apiserver.go:52] "Watching apiserver"
Apr 22 15:08:23.190687 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.190660 2575 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 22 15:08:23.191052 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.191017 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xrffc","kube-system/konnectivity-agent-qxtv4","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx","openshift-dns/node-resolver-hqq4l","openshift-multus/network-metrics-daemon-b6hrq","openshift-network-diagnostics/network-check-target-j6s9c","openshift-network-operator/iptables-alerter-x467p","openshift-ovn-kubernetes/ovnkube-node-lt5hd","kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal","openshift-cluster-node-tuning-operator/tuned-g4dbx","openshift-image-registry/node-ca-rw9wr","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal","openshift-multus/multus-4rqkv"]
Apr 22 15:08:23.193232 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.193204 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-x467p"
Apr 22 15:08:23.194736 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.194553 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-qxtv4"
Apr 22 15:08:23.196111 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.196094 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Apr 22 15:08:23.196216 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.196140 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Apr 22 15:08:23.196216 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.196175 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Apr 22 15:08:23.196322 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.196091 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-w64wz\""
Apr 22 15:08:23.196854 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.196824 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 22 15:08:23.196980 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.196919 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 22 15:08:23.197165 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.197109 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-dmdzt\""
Apr 22 15:08:23.198364 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.198344 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx"
Apr 22 15:08:23.198456 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.198421 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hqq4l"
Apr 22 15:08:23.199791 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.199773 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq"
Apr 22 15:08:23.199903 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.199849 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6"
Apr 22 15:08:23.200693 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.200672 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 22 15:08:23.201410 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.201040 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-txldn\""
Apr 22 15:08:23.201410 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.201145 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 22 15:08:23.201410 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.201351 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 22 15:08:23.201584 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.201526 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-b2fpn\""
Apr 22 15:08:23.201857 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.201840 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Apr 22 15:08:23.202035 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.201841 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 22 15:08:23.202774 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.202690 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c"
Apr 22 15:08:23.202774 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.202755 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc"
Apr 22 15:08:23.204235 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.204216 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.204400 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.204378 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.206059 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.205786 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.207508 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.207489 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rw9wr"
Apr 22 15:08:23.208195 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.207956 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Apr 22 15:08:23.210179 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.208942 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Apr 22 15:08:23.210179 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.209228 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-82vnw\""
Apr 22 15:08:23.210179 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.209272 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Apr 22 15:08:23.210179 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.209315 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Apr 22 15:08:23.210179 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.209646 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Apr 22 15:08:23.210179 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.209706 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Apr 22 15:08:23.210488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.210360 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Apr 22 15:08:23.210488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.210417 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-5g8cc\""
Apr 22 15:08:23.210569 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.210493 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Apr 22 15:08:23.210759 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.210740 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Apr 22 15:08:23.210950 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.210916 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Apr 22 15:08:23.211035 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.210951 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Apr 22 15:08:23.211035 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.210966 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4rqkv"
Apr 22 15:08:23.211035 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.211017 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Apr 22 15:08:23.211466 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.211443 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-g4ghc\""
Apr 22 15:08:23.211545 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.211473 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Apr 22 15:08:23.211676 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.211661 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Apr 22 15:08:23.211842 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.211825 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Apr 22 15:08:23.212223 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.212203 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Apr 22 15:08:23.212462 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.212434 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-42w7s\""
Apr 22 15:08:23.214743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.214620 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-cg6jl\""
Apr 22 15:08:23.214743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.214625 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Apr 22 15:08:23.215614 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215546 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.215614 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215571 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.215614 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215590 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27s5c\" (UniqueName: \"kubernetes.io/projected/be8c5f47-6214-42a7-8e36-1c852cc48be6-kube-api-access-27s5c\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq"
Apr 22 15:08:23.215614 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215612 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-ovnkube-config\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215629 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b9c0073-689d-408d-ac2b-84411c925f02-ovn-node-metrics-cert\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215642 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-sys\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215657 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a17c3e99-1108-4fee-af0c-ec3741b68100-konnectivity-ca\") pod \"konnectivity-agent-qxtv4\" (UID: \"a17c3e99-1108-4fee-af0c-ec3741b68100\") " pod="kube-system/konnectivity-agent-qxtv4"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215674 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-cni-netd\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215689 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-tmp\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215703 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-socket-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215717 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-ovn\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215730 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-device-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215744 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-slash\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215767 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215781 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-etc-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215796 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/af1546e5-60a5-4932-8506-3627e007c4b6-iptables-alerter-script\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215818 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ds9c\" (UniqueName: \"kubernetes.io/projected/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-kube-api-access-2ds9c\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215835 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215850 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-systemd\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.215883 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215892 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215908 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-node-log\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215928 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysconfig\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215943 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysctl-d\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215966 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-run\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215982 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.215999 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-kubelet\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216013 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-var-lib-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216028 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-cni-binary-copy\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216043 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-log-socket\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-log-socket\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216057 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-tuned\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216071 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqwd\" (UniqueName: \"kubernetes.io/projected/af1546e5-60a5-4932-8506-3627e007c4b6-kube-api-access-tkqwd\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216086 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-sys-fs\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216103 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-run-netns\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216119 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-env-overrides\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216135 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-etc-selinux\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.216553 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216158 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztn7g\" (UniqueName: \"kubernetes.io/projected/54264bd4-ce9e-4010-b213-56e5f4bfe070-kube-api-access-ztn7g\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216179 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216198 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-systemd\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216210 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-host\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216225 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-kubelet-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216243 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-registration-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216258 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/54264bd4-ce9e-4010-b213-56e5f4bfe070-hosts-file\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216272 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hhfl\" (UniqueName: \"kubernetes.io/projected/a5f9bf55-b089-4f8e-8313-0f7409db1455-kube-api-access-8hhfl\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216288 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv69j\" (UniqueName: \"kubernetes.io/projected/7b9c0073-689d-408d-ac2b-84411c925f02-kube-api-access-wv69j\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216301 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/54264bd4-ce9e-4010-b213-56e5f4bfe070-tmp-dir\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216315 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-os-release\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216337 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-systemd-units\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216351 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-run-ovn-kubernetes\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216368 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-cni-bin\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216381 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysctl-conf\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216395 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-var-lib-kubelet\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.217086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216409 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-cnibin\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.217801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216436 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-ovnkube-script-lib\") pod \"ovnkube-node-lt5hd\" (UID: 
\"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.217801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216450 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-modprobe-d\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.217801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216468 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-kubernetes\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.217801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216494 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-lib-modules\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.217801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216518 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af1546e5-60a5-4932-8506-3627e007c4b6-host-slash\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.217801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216543 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-system-cni-dir\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.217801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216567 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwk6\" (UniqueName: \"kubernetes.io/projected/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-kube-api-access-2kwk6\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.217801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.216592 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/a17c3e99-1108-4fee-af0c-ec3741b68100-agent-certs\") pod \"konnectivity-agent-qxtv4\" (UID: \"a17c3e99-1108-4fee-af0c-ec3741b68100\") " pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:23.262367 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.262288 2575 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-21 15:03:22 +0000 UTC" deadline="2028-01-26 05:53:51.765636974 +0000 UTC" Apr 22 15:08:23.262367 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.262364 2575 certificate_manager.go:431] "Waiting for next certificate rotation" 
logger="kubernetes.io/kubelet-serving" sleep="15446h45m28.503275251s" Apr 22 15:08:23.293930 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.293900 2575 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 22 15:08:23.307166 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.307131 2575 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 22 15:08:23.316782 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316730 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-cni-netd\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.316782 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316765 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-tmp\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.316782 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316783 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-socket-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316798 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-ovn\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316819 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24727a23-7950-43c6-9a15-92416687fab7-serviceca\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316853 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-device-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316892 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-slash\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316908 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-netns\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316923 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-multus-certs\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316939 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-etc-kubernetes\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316958 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316974 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-etc-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.316989 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/af1546e5-60a5-4932-8506-3627e007c4b6-iptables-alerter-script\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317006 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvqxp\" (UniqueName: \"kubernetes.io/projected/24727a23-7950-43c6-9a15-92416687fab7-kube-api-access-rvqxp\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317024 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ds9c\" (UniqueName: \"kubernetes.io/projected/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-kube-api-access-2ds9c\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317049 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " 
pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317073 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-systemd\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317088 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317102 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-node-log\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317118 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysconfig\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317132 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysctl-d\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317147 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-run\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317165 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317180 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-kubelet\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317194 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-var-lib-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317209 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24727a23-7950-43c6-9a15-92416687fab7-host\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317252 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-system-cni-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317269 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-cnibin\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317286 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-socket-dir-parent\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317305 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-cni-binary-copy\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317326 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-cni-binary-copy\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317344 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-log-socket\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317366 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-tuned\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317381 2575 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tkqwd\" (UniqueName: \"kubernetes.io/projected/af1546e5-60a5-4932-8506-3627e007c4b6-kube-api-access-tkqwd\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.317799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317397 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-kubelet\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317415 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-sys-fs\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317438 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-run-netns\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317455 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-env-overrides\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317470 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-etc-selinux\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317485 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztn7g\" (UniqueName: \"kubernetes.io/projected/54264bd4-ce9e-4010-b213-56e5f4bfe070-kube-api-access-ztn7g\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317512 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317529 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-systemd\") pod \"tuned-g4dbx\" (UID: 
\"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317545 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-host\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317564 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-conf-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317585 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-kubelet-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317610 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-registration-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317626 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/54264bd4-ce9e-4010-b213-56e5f4bfe070-hosts-file\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317643 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8hhfl\" (UniqueName: \"kubernetes.io/projected/a5f9bf55-b089-4f8e-8313-0f7409db1455-kube-api-access-8hhfl\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317658 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wv69j\" (UniqueName: \"kubernetes.io/projected/7b9c0073-689d-408d-ac2b-84411c925f02-kube-api-access-wv69j\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317681 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/54264bd4-ce9e-4010-b213-56e5f4bfe070-tmp-dir\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317697 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-os-release\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.318593 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317713 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-systemd-units\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317731 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-run-ovn-kubernetes\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317752 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-cni-bin\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317762 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-var-lib-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317786 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysctl-conf\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317804 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-var-lib-kubelet\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317825 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-cni-bin\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317830 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-cni-netd\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 
15:08:23.317844 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-cnibin\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317890 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-ovnkube-script-lib\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317912 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-modprobe-d\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317928 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-kubernetes\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317943 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-lib-modules\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317958 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af1546e5-60a5-4932-8506-3627e007c4b6-host-slash\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317981 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-cni-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.317999 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-system-cni-dir\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318021 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwk6\" (UniqueName: \"kubernetes.io/projected/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-kube-api-access-2kwk6\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") 
" pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.319307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318046 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/a17c3e99-1108-4fee-af0c-ec3741b68100-agent-certs\") pod \"konnectivity-agent-qxtv4\" (UID: \"a17c3e99-1108-4fee-af0c-ec3741b68100\") " pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318068 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-k8s-cni-cncf-io\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318091 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-hostroot\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318106 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-daemon-config\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318148 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318168 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318187 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-os-release\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318208 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-cni-multus\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318207 2575 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318265 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9mcw\" (UniqueName: \"kubernetes.io/projected/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-kube-api-access-g9mcw\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318296 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27s5c\" (UniqueName: \"kubernetes.io/projected/be8c5f47-6214-42a7-8e36-1c852cc48be6-kube-api-access-27s5c\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318328 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-ovnkube-config\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318353 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b9c0073-689d-408d-ac2b-84411c925f02-ovn-node-metrics-cert\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318380 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-sys\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.318407 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a17c3e99-1108-4fee-af0c-ec3741b68100-konnectivity-ca\") pod \"konnectivity-agent-qxtv4\" (UID: \"a17c3e99-1108-4fee-af0c-ec3741b68100\") " pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319118 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-systemd-units\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319543 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a17c3e99-1108-4fee-af0c-ec3741b68100-konnectivity-ca\") pod \"konnectivity-agent-qxtv4\" (UID: \"a17c3e99-1108-4fee-af0c-ec3741b68100\") " pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:23.319799 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319618 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-run-ovn-kubernetes\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319649 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-cni-bin\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319742 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysctl-conf\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319762 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-cni-binary-copy\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319780 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-var-lib-kubelet\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319825 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-cnibin\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.319850 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-log-socket\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320044 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-modprobe-d\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320230 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-sys-fs\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320255 2575 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-ovnkube-script-lib\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320262 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-run-netns\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.320488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320415 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320511 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-env-overrides\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320554 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-etc-selinux\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320734 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af1546e5-60a5-4932-8506-3627e007c4b6-host-slash\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320745 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-kubernetes\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320786 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-host\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320790 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-systemd\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320846 
Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320846 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-system-cni-dir\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320885 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-kubelet-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx"
Apr 22 15:08:23.320962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320899 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-lib-modules\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.321403 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321012 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.321403 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321104 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-registration-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx"
Apr 22 15:08:23.321403 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321180 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/54264bd4-ce9e-4010-b213-56e5f4bfe070-hosts-file\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l"
Apr 22 15:08:23.321403 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321290 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-socket-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx"
Apr 22 15:08:23.321622 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321525 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-ovn\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.321622 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.320738 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.321622 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321528 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/54264bd4-ce9e-4010-b213-56e5f4bfe070-tmp-dir\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l"
Apr 22 15:08:23.321811 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321627 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b9c0073-689d-408d-ac2b-84411c925f02-ovnkube-config\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.321811 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321645 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a5f9bf55-b089-4f8e-8313-0f7409db1455-os-release\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.321811 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321592 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-device-dir\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx"
Apr 22 15:08:23.321811 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321684 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.321811 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321717 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-etc-openvswitch\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.321811 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321716 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-node-log\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.321811 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.321744 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 22 15:08:23.321811 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321757 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysconfig\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.322546 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.322510 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:23.822456307 +0000 UTC m=+3.057704114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 22 15:08:23.322668 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.322582 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-sysctl-d\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.323014 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.322923 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-run\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.323257 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.323225 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-kubelet\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.323787 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.323761 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-sys\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx"
Apr 22 15:08:23.323787 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.321629 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-host-slash\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
Apr 22 15:08:23.324326 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.324293 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/a5f9bf55-b089-4f8e-8313-0f7409db1455-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc"
Apr 22 15:08:23.324401 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.324387 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b9c0073-689d-408d-ac2b-84411c925f02-run-systemd\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd"
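[Note] The nestedpendingoperations error above is the volume manager's retry gate, not a fatal condition: the failed MountVolume.SetUp for "metrics-certs" is parked with durationBeforeRetry 500ms ("No retries permitted until ..."), and repeated failures back off exponentially until the missing secret shows up. A minimal sketch of that bookkeeping; the 500ms initial delay matches the log, while the doubling factor and the cap are assumptions based on the kubelet's usual exponential-backoff defaults:

```go
package main

import (
	"fmt"
	"time"
)

// expBackoff tracks per-operation retry state the way
// nestedpendingoperations does: each failure stamps the operation with a
// "not before" time whose delay doubles on every failure, up to maxDelay.
type expBackoff struct {
	delay     time.Duration
	notBefore time.Time
	initial   time.Duration // 500ms in the log
	maxDelay  time.Duration // cap; assumed, not read from this log
}

func (b *expBackoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = b.initial
	} else {
		b.delay *= 2
		if b.delay > b.maxDelay {
			b.delay = b.maxDelay
		}
	}
	b.notBefore = now.Add(b.delay)
	fmt.Printf("No retries permitted until %s (durationBeforeRetry %s)\n",
		b.notBefore.Format(time.RFC3339Nano), b.delay)
}

func main() {
	b := &expBackoff{initial: 500 * time.Millisecond, maxDelay: 2 * time.Minute}
	now := time.Now()
	for i := 0; i < 4; i++ {
		b.fail(now)       // each reconcile pass that still fails
		now = b.notBefore // pretend the retry fired exactly at the deadline
	}
}
```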
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/af1546e5-60a5-4932-8506-3627e007c4b6-iptables-alerter-script\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.325761 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.325687 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/a17c3e99-1108-4fee-af0c-ec3741b68100-agent-certs\") pod \"konnectivity-agent-qxtv4\" (UID: \"a17c3e99-1108-4fee-af0c-ec3741b68100\") " pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:23.325980 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.325960 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-tmp\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.329748 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.329727 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztn7g\" (UniqueName: \"kubernetes.io/projected/54264bd4-ce9e-4010-b213-56e5f4bfe070-kube-api-access-ztn7g\") pod \"node-resolver-hqq4l\" (UID: \"54264bd4-ce9e-4010-b213-56e5f4bfe070\") " pod="openshift-dns/node-resolver-hqq4l" Apr 22 15:08:23.330626 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.330603 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 22 15:08:23.330700 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.330630 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 22 15:08:23.330700 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.330643 2575 projected.go:194] Error preparing data for projected volume kube-api-access-zhkz8 for pod openshift-network-diagnostics/network-check-target-j6s9c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:23.330764 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.330716 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8 podName:0e7fe577-78a6-4227-b074-218a66e869bc nodeName:}" failed. No retries permitted until 2026-04-22 15:08:23.830690109 +0000 UTC m=+3.065937899 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zhkz8" (UniqueName: "kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8") pod "network-check-target-j6s9c" (UID: "0e7fe577-78a6-4227-b074-218a66e869bc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:23.334505 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.332461 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hhfl\" (UniqueName: \"kubernetes.io/projected/a5f9bf55-b089-4f8e-8313-0f7409db1455-kube-api-access-8hhfl\") pod \"multus-additional-cni-plugins-xrffc\" (UID: \"a5f9bf55-b089-4f8e-8313-0f7409db1455\") " pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.334505 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.333408 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal" event={"ID":"9b396163e7a8a1c1709913f4b2fb7b1e","Type":"ContainerStarted","Data":"56aed3d0c99779aa955ea8ffa0ab4891c5619d2e6ef0c02f46e2bbe3ee2985e3"} Apr 22 15:08:23.334505 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.333902 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-etc-tuned\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.334505 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.334399 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ds9c\" (UniqueName: \"kubernetes.io/projected/1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4-kube-api-access-2ds9c\") pod \"aws-ebs-csi-driver-node-gqldx\" (UID: \"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.334753 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.334638 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b9c0073-689d-408d-ac2b-84411c925f02-ovn-node-metrics-cert\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.334858 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.334820 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqwd\" (UniqueName: \"kubernetes.io/projected/af1546e5-60a5-4932-8506-3627e007c4b6-kube-api-access-tkqwd\") pod \"iptables-alerter-x467p\" (UID: \"af1546e5-60a5-4932-8506-3627e007c4b6\") " pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.335095 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.335076 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwk6\" (UniqueName: \"kubernetes.io/projected/ffffeec3-bd38-4d24-8d3d-36ee2cdbe144-kube-api-access-2kwk6\") pod \"tuned-g4dbx\" (UID: \"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144\") " pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.337450 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.337419 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal" 
event={"ID":"a1f9ca9b98f3c0aa38bb3225e3c68dd3","Type":"ContainerStarted","Data":"0ee38e06006135ea1e4a4d904af38266067b625f30a81a876a94fb4e807fbc98"} Apr 22 15:08:23.337563 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.337540 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv69j\" (UniqueName: \"kubernetes.io/projected/7b9c0073-689d-408d-ac2b-84411c925f02-kube-api-access-wv69j\") pod \"ovnkube-node-lt5hd\" (UID: \"7b9c0073-689d-408d-ac2b-84411c925f02\") " pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.337843 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.337822 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27s5c\" (UniqueName: \"kubernetes.io/projected/be8c5f47-6214-42a7-8e36-1c852cc48be6-kube-api-access-27s5c\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:23.419686 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419639 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-conf-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419686 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419694 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-cni-bin\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419723 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-cni-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419753 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-conf-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419809 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-cni-bin\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419827 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-k8s-cni-cncf-io\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419755 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-k8s-cni-cncf-io\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419882 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-hostroot\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419904 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-daemon-config\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419908 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-cni-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.419951 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419941 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-os-release\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419967 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-cni-multus\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.419993 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9mcw\" (UniqueName: \"kubernetes.io/projected/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-kube-api-access-g9mcw\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420043 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24727a23-7950-43c6-9a15-92416687fab7-serviceca\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420065 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-netns\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420087 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-multus-certs\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420108 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-etc-kubernetes\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420164 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvqxp\" (UniqueName: \"kubernetes.io/projected/24727a23-7950-43c6-9a15-92416687fab7-kube-api-access-rvqxp\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420226 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24727a23-7950-43c6-9a15-92416687fab7-host\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420250 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-system-cni-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420282 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-cnibin\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420301 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-socket-dir-parent\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420332 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-cni-binary-copy\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420361 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-kubelet\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420429 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-kubelet\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420430 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-daemon-config\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420471 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-hostroot\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420548 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-os-release\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.420743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420579 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-var-lib-cni-multus\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.421090 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420830 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24727a23-7950-43c6-9a15-92416687fab7-host\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.421090 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420904 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-system-cni-dir\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.421090 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.420967 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-cnibin\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.421090 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.421014 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-multus-socket-dir-parent\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.421090 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.421081 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24727a23-7950-43c6-9a15-92416687fab7-serviceca\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " 
pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.421313 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.421117 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-netns\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.421313 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.421173 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-host-run-multus-certs\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.421313 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.421207 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-etc-kubernetes\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.421450 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.421430 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-cni-binary-copy\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.429529 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.429500 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvqxp\" (UniqueName: \"kubernetes.io/projected/24727a23-7950-43c6-9a15-92416687fab7-kube-api-access-rvqxp\") pod \"node-ca-rw9wr\" (UID: \"24727a23-7950-43c6-9a15-92416687fab7\") " pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.429711 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.429501 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9mcw\" (UniqueName: \"kubernetes.io/projected/df2d7157-ac73-43ed-adb1-0db7ad5e65fd-kube-api-access-g9mcw\") pod \"multus-4rqkv\" (UID: \"df2d7157-ac73-43ed-adb1-0db7ad5e65fd\") " pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.507472 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.507384 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-x467p" Apr 22 15:08:23.519351 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.519308 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:23.534300 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.534267 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" Apr 22 15:08:23.540994 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.540967 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hqq4l" Apr 22 15:08:23.548780 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.548748 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:23.556550 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.556520 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xrffc" Apr 22 15:08:23.564414 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.564377 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" Apr 22 15:08:23.573152 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.573117 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rw9wr" Apr 22 15:08:23.578970 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.578938 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4rqkv" Apr 22 15:08:23.823291 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.823160 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:23.823291 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.823280 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:23.823470 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.823345 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:24.82333 +0000 UTC m=+4.058577792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:23.924045 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:23.924001 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:23.924220 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.924162 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 22 15:08:23.924220 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.924178 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 22 15:08:23.924220 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.924189 2575 projected.go:194] Error preparing data for projected volume kube-api-access-zhkz8 for pod openshift-network-diagnostics/network-check-target-j6s9c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:23.924333 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:23.924247 2575 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8 podName:0e7fe577-78a6-4227-b074-218a66e869bc nodeName:}" failed. No retries permitted until 2026-04-22 15:08:24.924229868 +0000 UTC m=+4.159477658 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhkz8" (UniqueName: "kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8") pod "network-check-target-j6s9c" (UID: "0e7fe577-78a6-4227-b074-218a66e869bc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:24.169773 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:24.169744 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2d7157_ac73_43ed_adb1_0db7ad5e65fd.slice/crio-ffee30716adae4ebedf5b0cb1b4f2911adb9626248a8922e076204a003b167cb WatchSource:0}: Error finding container ffee30716adae4ebedf5b0cb1b4f2911adb9626248a8922e076204a003b167cb: Status 404 returned error can't find the container with id ffee30716adae4ebedf5b0cb1b4f2911adb9626248a8922e076204a003b167cb Apr 22 15:08:24.172394 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:24.172344 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5f9bf55_b089_4f8e_8313_0f7409db1455.slice/crio-764ebe1e344b47da9f73a3e56f903b76876fb998b5b308a4e45bb66623d52fac WatchSource:0}: Error finding container 764ebe1e344b47da9f73a3e56f903b76876fb998b5b308a4e45bb66623d52fac: Status 404 returned error can't find the container with id 764ebe1e344b47da9f73a3e56f903b76876fb998b5b308a4e45bb66623d52fac Apr 22 15:08:24.175663 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:24.175628 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b9c0073_689d_408d_ac2b_84411c925f02.slice/crio-f0d590fbddc91ccba19f9da0a2b5964ea3431e3e296c0d69bcda29dc9e65e357 WatchSource:0}: Error finding container f0d590fbddc91ccba19f9da0a2b5964ea3431e3e296c0d69bcda29dc9e65e357: Status 404 returned error can't find the container with id f0d590fbddc91ccba19f9da0a2b5964ea3431e3e296c0d69bcda29dc9e65e357 Apr 22 15:08:24.176618 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:24.176592 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda17c3e99_1108_4fee_af0c_ec3741b68100.slice/crio-e14dfc988cf7bdfd78460cd89a2d2968e7fd247608c6b1cf86eaefcc5126d10f WatchSource:0}: Error finding container e14dfc988cf7bdfd78460cd89a2d2968e7fd247608c6b1cf86eaefcc5126d10f: Status 404 returned error can't find the container with id e14dfc988cf7bdfd78460cd89a2d2968e7fd247608c6b1cf86eaefcc5126d10f Apr 22 15:08:24.178036 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:24.177546 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a90c4d2_3036_4e89_802f_c4dcb2e6bdd4.slice/crio-772bc636428502abe5772285880cf4a3a9571bfc99ba2223f1e727552483a762 WatchSource:0}: Error finding container 772bc636428502abe5772285880cf4a3a9571bfc99ba2223f1e727552483a762: Status 404 returned error can't find the container with id 772bc636428502abe5772285880cf4a3a9571bfc99ba2223f1e727552483a762 Apr 22 15:08:24.184349 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:24.183949 2575 
Apr 22 15:08:24.262800 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.262763 2575 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-21 15:03:22 +0000 UTC" deadline="2027-11-11 06:03:41.984779572 +0000 UTC"
Apr 22 15:08:24.262923 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.262800 2575 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13622h55m17.721982988s"
Apr 22 15:08:24.323484 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.323447 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq"
Apr 22 15:08:24.323658 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:24.323563 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6"
Apr 22 15:08:24.339878 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.339823 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-x467p" event={"ID":"af1546e5-60a5-4932-8506-3627e007c4b6","Type":"ContainerStarted","Data":"8815dac3987382612a1fbd086e6bf8367457ff0cb42a7d650f9b1a4939faf263"}
Apr 22 15:08:24.340849 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.340810 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" event={"ID":"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4","Type":"ContainerStarted","Data":"772bc636428502abe5772285880cf4a3a9571bfc99ba2223f1e727552483a762"}
Apr 22 15:08:24.341811 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.341780 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"f0d590fbddc91ccba19f9da0a2b5964ea3431e3e296c0d69bcda29dc9e65e357"}
Apr 22 15:08:24.342826 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.342795 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4rqkv" event={"ID":"df2d7157-ac73-43ed-adb1-0db7ad5e65fd","Type":"ContainerStarted","Data":"ffee30716adae4ebedf5b0cb1b4f2911adb9626248a8922e076204a003b167cb"}
Apr 22 15:08:24.344352 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.344331 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal" event={"ID":"9b396163e7a8a1c1709913f4b2fb7b1e","Type":"ContainerStarted","Data":"66c56adf1e54cc31a60f37196219b6d4bd108a3169fb1e5da7d3b00e3fb88e40"}
Apr 22 15:08:24.345429 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.345404 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hqq4l" event={"ID":"54264bd4-ce9e-4010-b213-56e5f4bfe070","Type":"ContainerStarted","Data":"cc9aa837923bf192ca44a7f2b4d4c0eeb740c73483e362e71c2955adb3bfa287"}
pod="openshift-dns/node-resolver-hqq4l" event={"ID":"54264bd4-ce9e-4010-b213-56e5f4bfe070","Type":"ContainerStarted","Data":"cc9aa837923bf192ca44a7f2b4d4c0eeb740c73483e362e71c2955adb3bfa287"} Apr 22 15:08:24.346390 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.346365 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rw9wr" event={"ID":"24727a23-7950-43c6-9a15-92416687fab7","Type":"ContainerStarted","Data":"af9405a0a12b56e033bbd75a1ce121eb39cefceb1e8ed1fe692aa12d15aa9ec3"} Apr 22 15:08:24.347353 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.347328 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" event={"ID":"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144","Type":"ContainerStarted","Data":"a4a7cdc5f3e43c66e86b86ff19fb69d925c6c10b78179001c7e4c3b6164fd448"} Apr 22 15:08:24.348446 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.348423 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-qxtv4" event={"ID":"a17c3e99-1108-4fee-af0c-ec3741b68100","Type":"ContainerStarted","Data":"e14dfc988cf7bdfd78460cd89a2d2968e7fd247608c6b1cf86eaefcc5126d10f"} Apr 22 15:08:24.349447 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.349427 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrffc" event={"ID":"a5f9bf55-b089-4f8e-8313-0f7409db1455","Type":"ContainerStarted","Data":"764ebe1e344b47da9f73a3e56f903b76876fb998b5b308a4e45bb66623d52fac"} Apr 22 15:08:24.359339 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.359278 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-217.ec2.internal" podStartSLOduration=2.359264752 podStartE2EDuration="2.359264752s" podCreationTimestamp="2026-04-22 15:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 15:08:24.359085979 +0000 UTC m=+3.594333789" watchObservedRunningTime="2026-04-22 15:08:24.359264752 +0000 UTC m=+3.594512562" Apr 22 15:08:24.829440 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.829392 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:24.829623 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:24.829598 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:24.829710 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:24.829666 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:26.82964618 +0000 UTC m=+6.064893982 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:24.930610 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:24.930569 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:24.930788 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:24.930765 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 22 15:08:24.930788 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:24.930784 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 22 15:08:24.930915 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:24.930797 2575 projected.go:194] Error preparing data for projected volume kube-api-access-zhkz8 for pod openshift-network-diagnostics/network-check-target-j6s9c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:24.930915 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:24.930854 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8 podName:0e7fe577-78a6-4227-b074-218a66e869bc nodeName:}" failed. No retries permitted until 2026-04-22 15:08:26.930835888 +0000 UTC m=+6.166083682 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhkz8" (UniqueName: "kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8") pod "network-check-target-j6s9c" (UID: "0e7fe577-78a6-4227-b074-218a66e869bc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:25.326680 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:25.326601 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:25.327217 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:25.326739 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:25.395095 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:25.394987 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal" event={"ID":"a1f9ca9b98f3c0aa38bb3225e3c68dd3","Type":"ContainerStarted","Data":"9e1ada70a49427cf7304589cea761f4cd3d6c9ee2ee07a341bd997f4f9b5f937"} Apr 22 15:08:26.323569 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:26.323533 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:26.323768 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:26.323679 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:26.409496 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:26.408628 2575 generic.go:358] "Generic (PLEG): container finished" podID="a1f9ca9b98f3c0aa38bb3225e3c68dd3" containerID="9e1ada70a49427cf7304589cea761f4cd3d6c9ee2ee07a341bd997f4f9b5f937" exitCode=0 Apr 22 15:08:26.409496 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:26.408697 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal" event={"ID":"a1f9ca9b98f3c0aa38bb3225e3c68dd3","Type":"ContainerDied","Data":"9e1ada70a49427cf7304589cea761f4cd3d6c9ee2ee07a341bd997f4f9b5f937"} Apr 22 15:08:26.846432 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:26.846389 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:26.846609 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:26.846595 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:26.846679 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:26.846666 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:30.846645438 +0000 UTC m=+10.081893227 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:26.947940 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:26.947194 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:26.947940 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:26.947450 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 22 15:08:26.947940 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:26.947473 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 22 15:08:26.947940 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:26.947486 2575 projected.go:194] Error preparing data for projected volume kube-api-access-zhkz8 for pod openshift-network-diagnostics/network-check-target-j6s9c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:26.947940 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:26.947552 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8 podName:0e7fe577-78a6-4227-b074-218a66e869bc nodeName:}" failed. No retries permitted until 2026-04-22 15:08:30.947530466 +0000 UTC m=+10.182778260 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhkz8" (UniqueName: "kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8") pod "network-check-target-j6s9c" (UID: "0e7fe577-78a6-4227-b074-218a66e869bc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:27.328669 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:27.328586 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:27.328815 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:27.328724 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:28.324059 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:28.324028 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:28.324542 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:28.324491 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:29.324197 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:29.324155 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:29.324601 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:29.324295 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:30.324023 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:30.323973 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:30.324223 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:30.324197 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:30.882629 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:30.882139 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:30.882629 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:30.882318 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:30.882629 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:30.882370 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:38.882357347 +0000 UTC m=+18.117605140 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:30.983310 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:30.983260 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:30.983496 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:30.983410 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 22 15:08:30.983496 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:30.983433 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 22 15:08:30.983496 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:30.983447 2575 projected.go:194] Error preparing data for projected volume kube-api-access-zhkz8 for pod openshift-network-diagnostics/network-check-target-j6s9c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:30.983650 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:30.983507 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8 podName:0e7fe577-78a6-4227-b074-218a66e869bc nodeName:}" failed. No retries permitted until 2026-04-22 15:08:38.983487773 +0000 UTC m=+18.218735566 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhkz8" (UniqueName: "kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8") pod "network-check-target-j6s9c" (UID: "0e7fe577-78a6-4227-b074-218a66e869bc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:31.324293 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:31.324259 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:31.324730 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:31.324383 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:32.324812 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:32.324280 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:32.324812 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:32.324423 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:33.326724 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:33.326697 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:33.327118 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:33.326814 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:34.323774 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:34.323733 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:34.323979 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:34.323875 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:35.323659 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:35.323624 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:35.324149 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:35.323732 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:36.323546 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:36.323501 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:36.323728 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:36.323684 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:37.324263 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:37.324185 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:37.324695 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:37.324318 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:38.154823 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.154791 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-6fwnt"] Apr 22 15:08:38.160720 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.160698 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.160855 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:38.160781 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:38.238803 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.238761 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/95af4bf4-9e09-49ec-bfb1-f16c11110db8-kubelet-config\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.238803 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.238800 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.239041 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.238896 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/95af4bf4-9e09-49ec-bfb1-f16c11110db8-dbus\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.324704 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.324247 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:38.324704 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:38.324373 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:38.339469 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.339432 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/95af4bf4-9e09-49ec-bfb1-f16c11110db8-kubelet-config\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.339641 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.339485 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.339641 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.339521 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/95af4bf4-9e09-49ec-bfb1-f16c11110db8-dbus\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.339641 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.339587 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/95af4bf4-9e09-49ec-bfb1-f16c11110db8-kubelet-config\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.339800 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.339640 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/95af4bf4-9e09-49ec-bfb1-f16c11110db8-dbus\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.339800 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:38.339693 2575 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:38.339800 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:38.339749 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret podName:95af4bf4-9e09-49ec-bfb1-f16c11110db8 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:38.839731517 +0000 UTC m=+18.074979501 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret") pod "global-pull-secret-syncer-6fwnt" (UID: "95af4bf4-9e09-49ec-bfb1-f16c11110db8") : object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:38.841831 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.841797 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:38.842018 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:38.841985 2575 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:38.842074 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:38.842061 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret podName:95af4bf4-9e09-49ec-bfb1-f16c11110db8 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:39.842041221 +0000 UTC m=+19.077289026 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret") pod "global-pull-secret-syncer-6fwnt" (UID: "95af4bf4-9e09-49ec-bfb1-f16c11110db8") : object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:38.942668 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:38.942622 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:38.942834 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:38.942746 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:38.942834 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:38.942824 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:54.94280487 +0000 UTC m=+34.178052676 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:39.043807 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:39.043770 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:39.044010 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:39.043915 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 22 15:08:39.044010 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:39.043930 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 22 15:08:39.044010 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:39.043941 2575 projected.go:194] Error preparing data for projected volume kube-api-access-zhkz8 for pod openshift-network-diagnostics/network-check-target-j6s9c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:39.044010 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:39.043997 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8 podName:0e7fe577-78a6-4227-b074-218a66e869bc nodeName:}" failed. No retries permitted until 2026-04-22 15:08:55.0439834 +0000 UTC m=+34.279231188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhkz8" (UniqueName: "kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8") pod "network-check-target-j6s9c" (UID: "0e7fe577-78a6-4227-b074-218a66e869bc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:39.324008 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:39.323978 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:39.324213 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:39.324111 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:39.324538 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:39.324505 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:39.324619 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:39.324598 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:39.849726 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:39.849694 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:39.850089 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:39.849834 2575 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:39.850089 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:39.849907 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret podName:95af4bf4-9e09-49ec-bfb1-f16c11110db8 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:41.849892092 +0000 UTC m=+21.085139885 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret") pod "global-pull-secret-syncer-6fwnt" (UID: "95af4bf4-9e09-49ec-bfb1-f16c11110db8") : object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:40.324068 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:40.324035 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:40.324239 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:40.324164 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:41.324639 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:41.324601 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:41.325018 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:41.324697 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:41.325018 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:41.324738 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:41.325018 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:41.324804 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:41.868458 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:41.868225 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:41.868674 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:41.868654 2575 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:41.868733 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:41.868722 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret podName:95af4bf4-9e09-49ec-bfb1-f16c11110db8 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:45.868705051 +0000 UTC m=+25.103952838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret") pod "global-pull-secret-syncer-6fwnt" (UID: "95af4bf4-9e09-49ec-bfb1-f16c11110db8") : object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:42.324343 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.324160 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:42.324495 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:42.324412 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:42.439237 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.439190 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal" event={"ID":"a1f9ca9b98f3c0aa38bb3225e3c68dd3","Type":"ContainerStarted","Data":"9a4586ed893c8b1f40236da6f2541d60c15394d7733d1d1e3cc5f546e09fb874"} Apr 22 15:08:42.440801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.440776 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" event={"ID":"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4","Type":"ContainerStarted","Data":"507d7103e1b8ad6d8fe3a1e93b36bfdae3e5f2524415dac793f361e61536858e"} Apr 22 15:08:42.443396 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.443375 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:08:42.443703 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.443682 2575 generic.go:358] "Generic (PLEG): container finished" podID="7b9c0073-689d-408d-ac2b-84411c925f02" containerID="7a109474687c746a22f8798b8721d85d4f12fdaea96ca0c40e053742816b85c5" exitCode=1 Apr 22 15:08:42.443779 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.443730 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"031cd7eda67863d04c38f4fc33f601bb9876147e4bb906f383fbba2ac5f9fc79"} Apr 22 15:08:42.443779 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.443752 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"c998da2084e63ea821fca2ffa923ebb462bb6afe48ca0bc97111c2456689b167"} Apr 22 15:08:42.443779 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.443766 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"bac930a4861057307987a7bdf2bedf66da0a9f9ad8904d1e0e11f42f9697f67f"} Apr 22 15:08:42.443934 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.443779 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"1950c263f879fc3d810402ac093da627bbb784c896c8e01f79aa7688e0e72e1c"} Apr 22 15:08:42.443934 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.443791 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerDied","Data":"7a109474687c746a22f8798b8721d85d4f12fdaea96ca0c40e053742816b85c5"} Apr 22 15:08:42.443934 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.443805 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"6570af9028bbe8afcb50dee3ff6b061cc6ecd7118d0cedb296e11300b346308b"} Apr 22 15:08:42.445142 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.445119 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-4rqkv" event={"ID":"df2d7157-ac73-43ed-adb1-0db7ad5e65fd","Type":"ContainerStarted","Data":"b6f392a906c49afb11a77d40e228b564879424ae2f62d172b80c157c5c0a8176"} Apr 22 15:08:42.447029 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.447002 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hqq4l" event={"ID":"54264bd4-ce9e-4010-b213-56e5f4bfe070","Type":"ContainerStarted","Data":"91d5b279a4f5592762c15ade325fdea89af42e244249c6f2b92095aed8ecd043"} Apr 22 15:08:42.448461 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.448439 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rw9wr" event={"ID":"24727a23-7950-43c6-9a15-92416687fab7","Type":"ContainerStarted","Data":"68e1fb5176b4f6ab30504351e96dd493879dbc8321cfc5626ee2005ca1c50059"} Apr 22 15:08:42.449779 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.449753 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" event={"ID":"ffffeec3-bd38-4d24-8d3d-36ee2cdbe144","Type":"ContainerStarted","Data":"2fe2fa8c0a7ab69b57c5bf9d7e250c198503b95b80b4e4c71bf836881181f4eb"} Apr 22 15:08:42.451113 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.451087 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-qxtv4" event={"ID":"a17c3e99-1108-4fee-af0c-ec3741b68100","Type":"ContainerStarted","Data":"3f00b01245259ff15e313f7b8b966d99a6d3d7223b94cd4cd9e034d4852ad91d"} Apr 22 15:08:42.453031 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.452577 2575 generic.go:358] "Generic (PLEG): container finished" podID="a5f9bf55-b089-4f8e-8313-0f7409db1455" containerID="4533c42ce63fceffe681ad22bf3502c59baa650ce402546d718a3b77e988812a" exitCode=0 Apr 22 15:08:42.453031 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.452611 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrffc" event={"ID":"a5f9bf55-b089-4f8e-8313-0f7409db1455","Type":"ContainerDied","Data":"4533c42ce63fceffe681ad22bf3502c59baa650ce402546d718a3b77e988812a"} Apr 22 15:08:42.494695 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.494644 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-217.ec2.internal" podStartSLOduration=20.494628572 podStartE2EDuration="20.494628572s" podCreationTimestamp="2026-04-22 15:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 15:08:42.463622319 +0000 UTC m=+21.698870155" watchObservedRunningTime="2026-04-22 15:08:42.494628572 +0000 UTC m=+21.729876381" Apr 22 15:08:42.517956 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.517895 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-qxtv4" podStartSLOduration=4.268707505 podStartE2EDuration="21.517858511s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.180420265 +0000 UTC m=+3.415668052" lastFinishedPulling="2026-04-22 15:08:41.429571268 +0000 UTC m=+20.664819058" observedRunningTime="2026-04-22 15:08:42.494333078 +0000 UTC m=+21.729580888" watchObservedRunningTime="2026-04-22 15:08:42.517858511 +0000 UTC m=+21.753106323" Apr 22 15:08:42.537299 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.536936 2575 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-cluster-node-tuning-operator/tuned-g4dbx" podStartSLOduration=3.959852936 podStartE2EDuration="21.536920332s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.185194031 +0000 UTC m=+3.420441824" lastFinishedPulling="2026-04-22 15:08:41.762261425 +0000 UTC m=+20.997509220" observedRunningTime="2026-04-22 15:08:42.519588441 +0000 UTC m=+21.754836275" watchObservedRunningTime="2026-04-22 15:08:42.536920332 +0000 UTC m=+21.772168145" Apr 22 15:08:42.566643 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.566591 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hqq4l" podStartSLOduration=3.991481612 podStartE2EDuration="21.566572805s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.187171499 +0000 UTC m=+3.422419288" lastFinishedPulling="2026-04-22 15:08:41.762262689 +0000 UTC m=+20.997510481" observedRunningTime="2026-04-22 15:08:42.536662165 +0000 UTC m=+21.771909969" watchObservedRunningTime="2026-04-22 15:08:42.566572805 +0000 UTC m=+21.801820619" Apr 22 15:08:42.628475 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.628417 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rw9wr" podStartSLOduration=4.051972587 podStartE2EDuration="21.628401329s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.185846667 +0000 UTC m=+3.421094470" lastFinishedPulling="2026-04-22 15:08:41.762275421 +0000 UTC m=+20.997523212" observedRunningTime="2026-04-22 15:08:42.592820093 +0000 UTC m=+21.828067917" watchObservedRunningTime="2026-04-22 15:08:42.628401329 +0000 UTC m=+21.863649139" Apr 22 15:08:42.628661 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.628550 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4rqkv" podStartSLOduration=4.029227068 podStartE2EDuration="21.628541925s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.171467302 +0000 UTC m=+3.406715092" lastFinishedPulling="2026-04-22 15:08:41.770782156 +0000 UTC m=+21.006029949" observedRunningTime="2026-04-22 15:08:42.627063328 +0000 UTC m=+21.862311138" watchObservedRunningTime="2026-04-22 15:08:42.628541925 +0000 UTC m=+21.863789737" Apr 22 15:08:42.955050 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:42.955015 2575 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 22 15:08:43.278994 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:43.278897 2575 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-22T15:08:42.955034486Z","UUID":"be4c31ba-ca6f-4695-b6e6-d5d3364e522f","Handler":null,"Name":"","Endpoint":""} Apr 22 15:08:43.280533 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:43.280509 2575 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 22 15:08:43.280649 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:43.280542 2575 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 22 15:08:43.324448 ip-10-0-134-217 
kubenswrapper[2575]: I0422 15:08:43.324415 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:43.324623 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:43.324415 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:43.324623 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:43.324542 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:43.324743 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:43.324645 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:43.456112 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:43.456077 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-x467p" event={"ID":"af1546e5-60a5-4932-8506-3627e007c4b6","Type":"ContainerStarted","Data":"fb5d6c73036eac173f25d0c185e9579e4236615a06ee237679561e94058a5b74"} Apr 22 15:08:43.458373 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:43.458340 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" event={"ID":"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4","Type":"ContainerStarted","Data":"f61fea52f18c530e73bbac816f87d8089dd5fbc06aeb2a0e9d7973f2c913028a"} Apr 22 15:08:43.484512 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:43.484457 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-x467p" podStartSLOduration=4.90614681 podStartE2EDuration="22.484441067s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.184263241 +0000 UTC m=+3.419511029" lastFinishedPulling="2026-04-22 15:08:41.762557491 +0000 UTC m=+20.997805286" observedRunningTime="2026-04-22 15:08:43.484344375 +0000 UTC m=+22.719592184" watchObservedRunningTime="2026-04-22 15:08:43.484441067 +0000 UTC m=+22.719688876" Apr 22 15:08:44.324609 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:44.324372 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:44.324781 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:44.324655 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:44.465043 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:44.464921 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" event={"ID":"1a90c4d2-3036-4e89-802f-c4dcb2e6bdd4","Type":"ContainerStarted","Data":"ccbc03002602ea882e0e95a19b79c4a6db9fbdf12f990cb49fedff7e16df7c89"} Apr 22 15:08:45.323548 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:45.323513 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:45.323752 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:45.323622 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:45.323925 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:45.323513 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:45.324053 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:45.324030 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:45.470000 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:45.469973 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:08:45.470544 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:45.470330 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"9bb2ad366576aea99bdbcf588ee895dfd3d4d20e0eff6e80ea118e71e5557b1a"} Apr 22 15:08:45.900643 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:45.900597 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:45.900894 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:45.900770 2575 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:45.900894 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:45.900854 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret podName:95af4bf4-9e09-49ec-bfb1-f16c11110db8 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:53.900827252 +0000 UTC m=+33.136075043 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret") pod "global-pull-secret-syncer-6fwnt" (UID: "95af4bf4-9e09-49ec-bfb1-f16c11110db8") : object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:46.090812 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:46.090767 2575 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:46.091426 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:46.091406 2575 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:46.108589 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:46.108533 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx" podStartSLOduration=5.318765876 podStartE2EDuration="25.108515968s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.182554373 +0000 UTC m=+3.417802166" lastFinishedPulling="2026-04-22 15:08:43.972304456 +0000 UTC m=+23.207552258" observedRunningTime="2026-04-22 15:08:44.493693174 +0000 UTC m=+23.728940983" watchObservedRunningTime="2026-04-22 15:08:46.108515968 +0000 UTC m=+25.343763779" Apr 22 15:08:46.324031 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:46.323991 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:46.324230 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:46.324128 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:46.472621 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:46.472590 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:46.473142 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:46.473114 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-qxtv4" Apr 22 15:08:47.324427 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.324155 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:47.324661 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.324156 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:47.324661 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:47.324533 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:47.324661 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:47.324575 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:47.476977 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.476946 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:08:47.477483 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.477305 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"7b2f8d25c0405d9b135c3dbf94a195a94ded728c74eadc859b2113245a6d7836"} Apr 22 15:08:47.477704 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.477690 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:47.477884 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.477841 2575 scope.go:117] "RemoveContainer" containerID="7a109474687c746a22f8798b8721d85d4f12fdaea96ca0c40e053742816b85c5" Apr 22 15:08:47.482698 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.482671 2575 generic.go:358] "Generic (PLEG): container finished" podID="a5f9bf55-b089-4f8e-8313-0f7409db1455" containerID="c37a6b2aff3cda2ad1359234cf84e1eb3b009efbcf75b8eaea9b7bb8a5d3daf0" exitCode=0 Apr 22 15:08:47.482840 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.482740 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrffc" event={"ID":"a5f9bf55-b089-4f8e-8313-0f7409db1455","Type":"ContainerDied","Data":"c37a6b2aff3cda2ad1359234cf84e1eb3b009efbcf75b8eaea9b7bb8a5d3daf0"} Apr 22 15:08:47.495345 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:47.495251 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:48.324033 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.323997 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:48.324250 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:48.324104 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:48.489186 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.489162 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:08:48.489624 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.489598 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" event={"ID":"7b9c0073-689d-408d-ac2b-84411c925f02","Type":"ContainerStarted","Data":"5d1899c06017b325be8c79739ecb2e7b6875b3b339baf0dea4d855f41b40af1d"} Apr 22 15:08:48.494391 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.494312 2575 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 22 15:08:48.494726 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.494701 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:48.514182 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.514153 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:48.528431 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.528373 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" podStartSLOduration=9.869181983 podStartE2EDuration="27.528359766s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.178170821 +0000 UTC m=+3.413418613" lastFinishedPulling="2026-04-22 15:08:41.837348592 +0000 UTC m=+21.072596396" observedRunningTime="2026-04-22 15:08:48.527282991 +0000 UTC m=+27.762530840" watchObservedRunningTime="2026-04-22 15:08:48.528359766 +0000 UTC m=+27.763607579" Apr 22 15:08:48.722054 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.721840 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-j6s9c"] Apr 22 15:08:48.722219 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.722164 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:48.722280 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:48.722250 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:48.727700 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.727658 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-6fwnt"] Apr 22 15:08:48.727846 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.727786 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:48.727956 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:48.727909 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:48.729230 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.729206 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-b6hrq"] Apr 22 15:08:48.729324 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:48.729306 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:48.729408 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:48.729385 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:49.493088 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:49.493047 2575 generic.go:358] "Generic (PLEG): container finished" podID="a5f9bf55-b089-4f8e-8313-0f7409db1455" containerID="4083bb004f8788c972ae7dff27f4b34af3186d8ed256bf118945df1f031173d7" exitCode=0 Apr 22 15:08:49.493477 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:49.493149 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrffc" event={"ID":"a5f9bf55-b089-4f8e-8313-0f7409db1455","Type":"ContainerDied","Data":"4083bb004f8788c972ae7dff27f4b34af3186d8ed256bf118945df1f031173d7"} Apr 22 15:08:49.493477 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:49.493355 2575 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 22 15:08:50.040483 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:50.040447 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:08:50.324439 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:50.324357 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:50.324609 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:50.324361 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:50.324609 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:50.324466 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:50.324609 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:50.324565 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:50.324609 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:50.324360 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:50.324809 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:50.324655 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:51.499059 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:51.499025 2575 generic.go:358] "Generic (PLEG): container finished" podID="a5f9bf55-b089-4f8e-8313-0f7409db1455" containerID="53a9b6e1907384e0adbf757563a8d96504debf49efee38f692fafbaf370fb668" exitCode=0 Apr 22 15:08:51.499680 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:51.499111 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrffc" event={"ID":"a5f9bf55-b089-4f8e-8313-0f7409db1455","Type":"ContainerDied","Data":"53a9b6e1907384e0adbf757563a8d96504debf49efee38f692fafbaf370fb668"} Apr 22 15:08:51.516085 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:51.516031 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" podUID="7b9c0073-689d-408d-ac2b-84411c925f02" containerName="ovnkube-controller" probeResult="failure" output="" Apr 22 15:08:52.324070 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:52.324037 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:52.324280 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:52.324038 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:52.324280 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:52.324154 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:52.324280 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:52.324170 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:52.324280 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:52.324239 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:52.324506 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:52.324313 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:53.960171 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:53.960130 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:53.960676 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:53.960277 2575 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:53.960676 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:53.960359 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret podName:95af4bf4-9e09-49ec-bfb1-f16c11110db8 nodeName:}" failed. No retries permitted until 2026-04-22 15:09:09.960342272 +0000 UTC m=+49.195590065 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret") pod "global-pull-secret-syncer-6fwnt" (UID: "95af4bf4-9e09-49ec-bfb1-f16c11110db8") : object "kube-system"/"original-pull-secret" not registered Apr 22 15:08:54.324470 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:54.324390 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:54.324650 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:54.324508 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:54.324650 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:54.324511 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-j6s9c" podUID="0e7fe577-78a6-4227-b074-218a66e869bc" Apr 22 15:08:54.324650 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:54.324587 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6fwnt" podUID="95af4bf4-9e09-49ec-bfb1-f16c11110db8" Apr 22 15:08:54.324650 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:54.324615 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:54.324800 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:54.324715 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:08:54.968616 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:54.968575 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:54.969154 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:54.968724 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:54.969154 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:54.968800 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:09:26.96878013 +0000 UTC m=+66.204027935 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 22 15:08:55.069668 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.069613 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:55.069834 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:55.069779 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 22 15:08:55.069834 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:55.069805 2575 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 22 15:08:55.069834 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:55.069816 2575 projected.go:194] Error preparing data for projected volume kube-api-access-zhkz8 for pod openshift-network-diagnostics/network-check-target-j6s9c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:55.069979 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:55.069887 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8 podName:0e7fe577-78a6-4227-b074-218a66e869bc nodeName:}" failed. No retries permitted until 2026-04-22 15:09:27.069858869 +0000 UTC m=+66.305106657 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zhkz8" (UniqueName: "kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8") pod "network-check-target-j6s9c" (UID: "0e7fe577-78a6-4227-b074-218a66e869bc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 22 15:08:55.635516 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.633987 2575 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-217.ec2.internal" event="NodeReady" Apr 22 15:08:55.635516 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.634154 2575 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 22 15:08:55.692163 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.692129 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn"] Apr 22 15:08:55.727946 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.727161 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg"] Apr 22 15:08:55.751449 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.751401 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv"] Apr 22 15:08:55.751633 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.751493 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.751633 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.751416 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:08:55.757509 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.757484 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\"" Apr 22 15:08:55.757889 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.757848 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 22 15:08:55.758003 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.757924 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 22 15:08:55.758099 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.758081 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 22 15:08:55.758475 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.758455 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 22 15:08:55.758565 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.758534 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 22 15:08:55.758660 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.758645 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 22 15:08:55.759010 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.758988 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-t8stb\"" Apr 22 15:08:55.762798 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.762767 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 22 15:08:55.767043 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.767024 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"] Apr 22 15:08:55.767282 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.767255 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.774758 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.774738 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 22 15:08:55.782403 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.782380 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn"] Apr 22 15:08:55.782403 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.782413 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg"] Apr 22 15:08:55.782567 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.782429 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv"] Apr 22 15:08:55.782567 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.782445 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rb7d6"] Apr 22 15:08:55.782567 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.782538 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.792584 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.792048 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 22 15:08:55.792584 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.792224 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 22 15:08:55.792584 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.792230 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-dblpk\"" Apr 22 15:08:55.792990 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.792971 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 22 15:08:55.802057 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.802032 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 22 15:08:55.807344 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.807199 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4lslj"] Apr 22 15:08:55.807429 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.807408 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:55.813938 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.813917 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 22 15:08:55.814666 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.814644 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 22 15:08:55.815100 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.815083 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-fkblf\"" Apr 22 15:08:55.828439 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.828407 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"] Apr 22 15:08:55.828439 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.828445 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4lslj"] Apr 22 15:08:55.828645 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.828460 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rb7d6"] Apr 22 15:08:55.828645 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.828550 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:55.834287 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.834263 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 22 15:08:55.834471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.834384 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-cbqpc\"" Apr 22 15:08:55.834471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.834432 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 22 15:08:55.834800 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.834775 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 22 15:08:55.877478 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877426 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-installation-pull-secrets\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.877478 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877472 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27jm2\" (UniqueName: \"kubernetes.io/projected/700dd30f-621e-4e1b-970e-a0fe55861cb9-kube-api-access-27jm2\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.877711 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877501 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwx2n\" (UniqueName: 
\"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-kube-api-access-dwx2n\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.877711 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877629 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plpfp\" (UniqueName: \"kubernetes.io/projected/b50382c5-ef34-4d73-9526-989655d2e11f-kube-api-access-plpfp\") pod \"managed-serviceaccount-addon-agent-648d979695-ch7nn\" (UID: \"b50382c5-ef34-4d73-9526-989655d2e11f\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:08:55.877711 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877707 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2adf4441-467a-46c0-a616-97afe2eb9fe8-ca-trust-extracted\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.877839 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877730 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/047276c4-c2f7-4f16-a7be-64fba5485b6c-klusterlet-config\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.877839 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877747 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxbdg\" (UniqueName: \"kubernetes.io/projected/047276c4-c2f7-4f16-a7be-64fba5485b6c-kube-api-access-gxbdg\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.877839 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877771 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.877981 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877830 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/b50382c5-ef34-4d73-9526-989655d2e11f-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-648d979695-ch7nn\" (UID: \"b50382c5-ef34-4d73-9526-989655d2e11f\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:08:55.877981 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877896 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: 
\"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.877981 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877925 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/047276c4-c2f7-4f16-a7be-64fba5485b6c-tmp\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.877981 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877958 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-ca\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.878139 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.877987 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.878139 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.878011 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/700dd30f-621e-4e1b-970e-a0fe55861cb9-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.878139 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.878059 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-trusted-ca\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.878139 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.878095 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-image-registry-private-configuration\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.878139 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.878120 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-certificates\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.878341 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.878142 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-bound-sa-token\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.878341 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.878160 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-hub\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.978931 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.978834 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.978931 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.978886 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/700dd30f-621e-4e1b-970e-a0fe55861cb9-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.978931 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.978918 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-trusted-ca\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.979493 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.978978 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-config-volume\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:55.979493 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979017 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:55.979493 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979053 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-image-registry-private-configuration\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.979493 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979078 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-certificates\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.979493 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979102 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-bound-sa-token\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.979493 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979132 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-hub\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.979800 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979734 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-certificates\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.979857 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979785 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9pw\" (UniqueName: \"kubernetes.io/projected/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-kube-api-access-zg9pw\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:55.979857 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979832 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-installation-pull-secrets\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.979987 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979853 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-trusted-ca\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.979987 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979886 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27jm2\" (UniqueName: \"kubernetes.io/projected/700dd30f-621e-4e1b-970e-a0fe55861cb9-kube-api-access-27jm2\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.979987 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979930 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwx2n\" (UniqueName: 
\"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-kube-api-access-dwx2n\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.979987 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979962 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:55.979987 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.979985 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-tmp-dir\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:55.980211 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980010 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l284x\" (UniqueName: \"kubernetes.io/projected/60651bed-aafc-4a23-b90f-3110ee68359c-kube-api-access-l284x\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:55.980211 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980061 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plpfp\" (UniqueName: \"kubernetes.io/projected/b50382c5-ef34-4d73-9526-989655d2e11f-kube-api-access-plpfp\") pod \"managed-serviceaccount-addon-agent-648d979695-ch7nn\" (UID: \"b50382c5-ef34-4d73-9526-989655d2e11f\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:08:55.980211 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980120 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2adf4441-467a-46c0-a616-97afe2eb9fe8-ca-trust-extracted\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.980211 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980156 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/047276c4-c2f7-4f16-a7be-64fba5485b6c-klusterlet-config\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.980211 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980187 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gxbdg\" (UniqueName: \"kubernetes.io/projected/047276c4-c2f7-4f16-a7be-64fba5485b6c-kube-api-access-gxbdg\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.980454 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980212 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: 
\"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.980454 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980251 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/b50382c5-ef34-4d73-9526-989655d2e11f-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-648d979695-ch7nn\" (UID: \"b50382c5-ef34-4d73-9526-989655d2e11f\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:08:55.980454 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980276 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.980454 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980302 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/047276c4-c2f7-4f16-a7be-64fba5485b6c-tmp\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.980454 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980334 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-ca\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.981698 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:55.980805 2575 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 22 15:08:55.981698 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.980814 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2adf4441-467a-46c0-a616-97afe2eb9fe8-ca-trust-extracted\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.981698 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:55.980825 2575 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66f5f8d5cd-rgqhw: secret "image-registry-tls" not found Apr 22 15:08:55.981698 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:55.980915 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls podName:2adf4441-467a-46c0-a616-97afe2eb9fe8 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:56.480897563 +0000 UTC m=+35.716145367 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls") pod "image-registry-66f5f8d5cd-rgqhw" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8") : secret "image-registry-tls" not found Apr 22 15:08:55.981698 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.981631 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/047276c4-c2f7-4f16-a7be-64fba5485b6c-tmp\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.984647 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.984621 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.984760 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.984723 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.985067 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.985047 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-image-registry-private-configuration\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.985160 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.985073 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-installation-pull-secrets\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.985717 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.985680 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-ca\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.985852 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.985834 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/700dd30f-621e-4e1b-970e-a0fe55861cb9-hub\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.986021 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.985990 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/b50382c5-ef34-4d73-9526-989655d2e11f-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-648d979695-ch7nn\" (UID: \"b50382c5-ef34-4d73-9526-989655d2e11f\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:08:55.986186 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.986168 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/047276c4-c2f7-4f16-a7be-64fba5485b6c-klusterlet-config\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:55.989484 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.989459 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/700dd30f-621e-4e1b-970e-a0fe55861cb9-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.997619 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.997593 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plpfp\" (UniqueName: \"kubernetes.io/projected/b50382c5-ef34-4d73-9526-989655d2e11f-kube-api-access-plpfp\") pod \"managed-serviceaccount-addon-agent-648d979695-ch7nn\" (UID: \"b50382c5-ef34-4d73-9526-989655d2e11f\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:08:55.998013 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.997848 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwx2n\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-kube-api-access-dwx2n\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:55.999520 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.999498 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27jm2\" (UniqueName: \"kubernetes.io/projected/700dd30f-621e-4e1b-970e-a0fe55861cb9-kube-api-access-27jm2\") pod \"cluster-proxy-proxy-agent-86d5fdd478-9njxg\" (UID: \"700dd30f-621e-4e1b-970e-a0fe55861cb9\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:55.999949 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:55.999928 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-bound-sa-token\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:56.000360 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.000337 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxbdg\" (UniqueName: \"kubernetes.io/projected/047276c4-c2f7-4f16-a7be-64fba5485b6c-kube-api-access-gxbdg\") pod \"klusterlet-addon-workmgr-567b4745f5-tj4cv\" (UID: \"047276c4-c2f7-4f16-a7be-64fba5485b6c\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:56.077592 ip-10-0-134-217 
kubenswrapper[2575]: I0422 15:08:56.077550 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:08:56.081902 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.081641 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-config-volume\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:56.082016 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.081925 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:56.082016 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.081963 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zg9pw\" (UniqueName: \"kubernetes.io/projected/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-kube-api-access-zg9pw\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:56.082016 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.081997 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:56.082143 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.082024 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-tmp-dir\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:56.082143 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.082051 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l284x\" (UniqueName: \"kubernetes.io/projected/60651bed-aafc-4a23-b90f-3110ee68359c-kube-api-access-l284x\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:56.082143 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.082072 2575 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 22 15:08:56.082143 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.082109 2575 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 22 15:08:56.082143 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.082140 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert podName:60651bed-aafc-4a23-b90f-3110ee68359c nodeName:}" failed. No retries permitted until 2026-04-22 15:08:56.582121963 +0000 UTC m=+35.817369769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert") pod "ingress-canary-4lslj" (UID: "60651bed-aafc-4a23-b90f-3110ee68359c") : secret "canary-serving-cert" not found Apr 22 15:08:56.082311 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.082172 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls podName:63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b nodeName:}" failed. No retries permitted until 2026-04-22 15:08:56.582156064 +0000 UTC m=+35.817403855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls") pod "dns-default-rb7d6" (UID: "63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b") : secret "dns-default-metrics-tls" not found Apr 22 15:08:56.082311 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.082221 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-config-volume\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:56.082494 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.082478 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-tmp-dir\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:56.087759 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.087734 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:08:56.100300 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.100274 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg9pw\" (UniqueName: \"kubernetes.io/projected/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-kube-api-access-zg9pw\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:56.102146 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.102126 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:08:56.108116 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.108094 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l284x\" (UniqueName: \"kubernetes.io/projected/60651bed-aafc-4a23-b90f-3110ee68359c-kube-api-access-l284x\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:56.323931 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.323836 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:08:56.323931 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.323882 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:08:56.324124 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.324022 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:08:56.327115 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.327091 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 22 15:08:56.327321 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.327290 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-62z2n\"" Apr 22 15:08:56.327498 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.327484 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-4nbn2\"" Apr 22 15:08:56.328282 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.328266 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 22 15:08:56.328416 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.328400 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 22 15:08:56.328802 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.328787 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 22 15:08:56.485419 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.485376 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:56.485587 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.485493 2575 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 22 15:08:56.485587 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.485515 2575 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66f5f8d5cd-rgqhw: secret "image-registry-tls" not found Apr 22 15:08:56.485587 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.485586 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls podName:2adf4441-467a-46c0-a616-97afe2eb9fe8 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:57.485564332 +0000 UTC m=+36.720812138 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls") pod "image-registry-66f5f8d5cd-rgqhw" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8") : secret "image-registry-tls" not found Apr 22 15:08:56.586492 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.586339 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:56.586492 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:56.586415 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:56.586713 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.586519 2575 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 22 15:08:56.586713 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.586572 2575 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 22 15:08:56.586713 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.586593 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert podName:60651bed-aafc-4a23-b90f-3110ee68359c nodeName:}" failed. No retries permitted until 2026-04-22 15:08:57.586572282 +0000 UTC m=+36.821820085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert") pod "ingress-canary-4lslj" (UID: "60651bed-aafc-4a23-b90f-3110ee68359c") : secret "canary-serving-cert" not found Apr 22 15:08:56.586713 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:56.586626 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls podName:63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b nodeName:}" failed. No retries permitted until 2026-04-22 15:08:57.58661542 +0000 UTC m=+36.821863208 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls") pod "dns-default-rb7d6" (UID: "63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b") : secret "dns-default-metrics-tls" not found Apr 22 15:08:57.346288 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.346256 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn"] Apr 22 15:08:57.353006 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.352984 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg"] Apr 22 15:08:57.365966 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.365939 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv"] Apr 22 15:08:57.415475 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:57.415437 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb50382c5_ef34_4d73_9526_989655d2e11f.slice/crio-284a95a214551a60385128ba350a293f3655400c2216b227dcf12d5a9ffb693d WatchSource:0}: Error finding container 284a95a214551a60385128ba350a293f3655400c2216b227dcf12d5a9ffb693d: Status 404 returned error can't find the container with id 284a95a214551a60385128ba350a293f3655400c2216b227dcf12d5a9ffb693d Apr 22 15:08:57.415996 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:57.415959 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod700dd30f_621e_4e1b_970e_a0fe55861cb9.slice/crio-26274ea3a7615d7da83929c3095ec7bb94cd6bb991e1a81e0ff27b1d1b741f5f WatchSource:0}: Error finding container 26274ea3a7615d7da83929c3095ec7bb94cd6bb991e1a81e0ff27b1d1b741f5f: Status 404 returned error can't find the container with id 26274ea3a7615d7da83929c3095ec7bb94cd6bb991e1a81e0ff27b1d1b741f5f Apr 22 15:08:57.416798 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:08:57.416772 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod047276c4_c2f7_4f16_a7be_64fba5485b6c.slice/crio-86f3dbeae470c6c581bed9892b29837620240ec2efd7af3d32ddd3edc88e34c3 WatchSource:0}: Error finding container 86f3dbeae470c6c581bed9892b29837620240ec2efd7af3d32ddd3edc88e34c3: Status 404 returned error can't find the container with id 86f3dbeae470c6c581bed9892b29837620240ec2efd7af3d32ddd3edc88e34c3 Apr 22 15:08:57.495499 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.495471 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:57.495613 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:57.495600 2575 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 22 15:08:57.495613 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:57.495614 2575 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66f5f8d5cd-rgqhw: secret "image-registry-tls" not found Apr 22 15:08:57.495708 ip-10-0-134-217 
kubenswrapper[2575]: E0422 15:08:57.495671 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls podName:2adf4441-467a-46c0-a616-97afe2eb9fe8 nodeName:}" failed. No retries permitted until 2026-04-22 15:08:59.495652939 +0000 UTC m=+38.730900730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls") pod "image-registry-66f5f8d5cd-rgqhw" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8") : secret "image-registry-tls" not found Apr 22 15:08:57.512300 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.512254 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" event={"ID":"700dd30f-621e-4e1b-970e-a0fe55861cb9","Type":"ContainerStarted","Data":"26274ea3a7615d7da83929c3095ec7bb94cd6bb991e1a81e0ff27b1d1b741f5f"} Apr 22 15:08:57.513559 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.513533 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerStarted","Data":"284a95a214551a60385128ba350a293f3655400c2216b227dcf12d5a9ffb693d"} Apr 22 15:08:57.514637 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.514612 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerStarted","Data":"86f3dbeae470c6c581bed9892b29837620240ec2efd7af3d32ddd3edc88e34c3"} Apr 22 15:08:57.596138 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.596110 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:57.596260 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:57.596221 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:57.596309 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:57.596261 2575 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 22 15:08:57.596309 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:57.596296 2575 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 22 15:08:57.596368 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:57.596327 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls podName:63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b nodeName:}" failed. No retries permitted until 2026-04-22 15:08:59.596311836 +0000 UTC m=+38.831559624 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls") pod "dns-default-rb7d6" (UID: "63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b") : secret "dns-default-metrics-tls" not found Apr 22 15:08:57.596368 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:57.596345 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert podName:60651bed-aafc-4a23-b90f-3110ee68359c nodeName:}" failed. No retries permitted until 2026-04-22 15:08:59.596335817 +0000 UTC m=+38.831583605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert") pod "ingress-canary-4lslj" (UID: "60651bed-aafc-4a23-b90f-3110ee68359c") : secret "canary-serving-cert" not found Apr 22 15:08:58.523448 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:58.523408 2575 generic.go:358] "Generic (PLEG): container finished" podID="a5f9bf55-b089-4f8e-8313-0f7409db1455" containerID="5506d4a532601477875f25ba8ad0b153eb3526faa505b4e39e404832581beaf5" exitCode=0 Apr 22 15:08:58.523968 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:58.523492 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrffc" event={"ID":"a5f9bf55-b089-4f8e-8313-0f7409db1455","Type":"ContainerDied","Data":"5506d4a532601477875f25ba8ad0b153eb3526faa505b4e39e404832581beaf5"} Apr 22 15:08:59.517503 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:59.517232 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:08:59.517904 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:59.517734 2575 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 22 15:08:59.517904 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:59.517752 2575 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66f5f8d5cd-rgqhw: secret "image-registry-tls" not found Apr 22 15:08:59.517904 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:59.517811 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls podName:2adf4441-467a-46c0-a616-97afe2eb9fe8 nodeName:}" failed. No retries permitted until 2026-04-22 15:09:03.517792019 +0000 UTC m=+42.753039822 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls") pod "image-registry-66f5f8d5cd-rgqhw" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8") : secret "image-registry-tls" not found Apr 22 15:08:59.543266 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:59.542301 2575 generic.go:358] "Generic (PLEG): container finished" podID="a5f9bf55-b089-4f8e-8313-0f7409db1455" containerID="4ef2974c31b81b48364d027e454ec31b587d6e32bf97a8d1c03a77245ff0a793" exitCode=0 Apr 22 15:08:59.543266 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:59.542380 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrffc" event={"ID":"a5f9bf55-b089-4f8e-8313-0f7409db1455","Type":"ContainerDied","Data":"4ef2974c31b81b48364d027e454ec31b587d6e32bf97a8d1c03a77245ff0a793"} Apr 22 15:08:59.618083 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:59.618043 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:08:59.618253 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:08:59.618113 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:08:59.618412 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:59.618393 2575 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 22 15:08:59.618488 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:59.618465 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert podName:60651bed-aafc-4a23-b90f-3110ee68359c nodeName:}" failed. No retries permitted until 2026-04-22 15:09:03.618445096 +0000 UTC m=+42.853692885 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert") pod "ingress-canary-4lslj" (UID: "60651bed-aafc-4a23-b90f-3110ee68359c") : secret "canary-serving-cert" not found Apr 22 15:08:59.618915 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:59.618888 2575 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 22 15:08:59.619022 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:08:59.618964 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls podName:63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b nodeName:}" failed. No retries permitted until 2026-04-22 15:09:03.618945029 +0000 UTC m=+42.854192820 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls") pod "dns-default-rb7d6" (UID: "63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b") : secret "dns-default-metrics-tls" not found Apr 22 15:09:00.547566 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:00.547530 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrffc" event={"ID":"a5f9bf55-b089-4f8e-8313-0f7409db1455","Type":"ContainerStarted","Data":"9de8461c4e60a20abcd601056b78251b7d8597bee92c4f078eb78408a86260f6"} Apr 22 15:09:00.575393 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:00.575307 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xrffc" podStartSLOduration=6.291762077 podStartE2EDuration="39.575291186s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:08:24.174516992 +0000 UTC m=+3.409764794" lastFinishedPulling="2026-04-22 15:08:57.458045973 +0000 UTC m=+36.693293903" observedRunningTime="2026-04-22 15:09:00.574435466 +0000 UTC m=+39.809683276" watchObservedRunningTime="2026-04-22 15:09:00.575291186 +0000 UTC m=+39.810538995" Apr 22 15:09:03.555257 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.555031 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerStarted","Data":"a34538fd36d3bfda0920a91ef8f3b0f36d554e7a47196f3ffb70fbe88bdc2f7d"} Apr 22 15:09:03.555662 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.555640 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:09:03.555811 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:03.555789 2575 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 22 15:09:03.555811 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:03.555811 2575 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66f5f8d5cd-rgqhw: secret "image-registry-tls" not found Apr 22 15:09:03.555983 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:03.555857 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls podName:2adf4441-467a-46c0-a616-97afe2eb9fe8 nodeName:}" failed. No retries permitted until 2026-04-22 15:09:11.555843791 +0000 UTC m=+50.791091579 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls") pod "image-registry-66f5f8d5cd-rgqhw" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8") : secret "image-registry-tls" not found Apr 22 15:09:03.557173 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.557149 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerStarted","Data":"76ad31e6f176e4c33c585ab6bd257ebdd0ec300424c8a574b9ca24b69f97657e"} Apr 22 15:09:03.557854 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.557837 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:09:03.559322 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.559299 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" event={"ID":"700dd30f-621e-4e1b-970e-a0fe55861cb9","Type":"ContainerStarted","Data":"6d9f7fcf06cfcc6c7d5cb12e3f1d48cd076ce6809a8728c160a6a51565d28082"} Apr 22 15:09:03.559404 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.559383 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:09:03.579349 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.579278 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podStartSLOduration=31.624150187 podStartE2EDuration="37.579265056s" podCreationTimestamp="2026-04-22 15:08:26 +0000 UTC" firstStartedPulling="2026-04-22 15:08:57.432822651 +0000 UTC m=+36.668070441" lastFinishedPulling="2026-04-22 15:09:03.38793751 +0000 UTC m=+42.623185310" observedRunningTime="2026-04-22 15:09:03.57879806 +0000 UTC m=+42.814045872" watchObservedRunningTime="2026-04-22 15:09:03.579265056 +0000 UTC m=+42.814512857" Apr 22 15:09:03.656420 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.656313 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:09:03.656420 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:03.656386 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:09:03.656637 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:03.656453 2575 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 22 15:09:03.656637 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:03.656480 2575 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 22 15:09:03.656637 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:03.656544 2575 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls podName:63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b nodeName:}" failed. No retries permitted until 2026-04-22 15:09:11.656528237 +0000 UTC m=+50.891776036 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls") pod "dns-default-rb7d6" (UID: "63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b") : secret "dns-default-metrics-tls" not found Apr 22 15:09:03.656637 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:03.656561 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert podName:60651bed-aafc-4a23-b90f-3110ee68359c nodeName:}" failed. No retries permitted until 2026-04-22 15:09:11.656553874 +0000 UTC m=+50.891801667 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert") pod "ingress-canary-4lslj" (UID: "60651bed-aafc-4a23-b90f-3110ee68359c") : secret "canary-serving-cert" not found Apr 22 15:09:06.567515 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:06.567472 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" event={"ID":"700dd30f-621e-4e1b-970e-a0fe55861cb9","Type":"ContainerStarted","Data":"c3ce6816ea71466fb6d93735b8036da1d8799afa8a350e59637628f66f8e8d75"} Apr 22 15:09:06.567515 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:06.567508 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" event={"ID":"700dd30f-621e-4e1b-970e-a0fe55861cb9","Type":"ContainerStarted","Data":"c6e6b626cd5c51b99c2f598cdaea1b27de7be8da4bb74e1875a7c604e14b511c"} Apr 22 15:09:06.602337 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:06.602272 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" podStartSLOduration=32.23722999 podStartE2EDuration="40.602257359s" podCreationTimestamp="2026-04-22 15:08:26 +0000 UTC" firstStartedPulling="2026-04-22 15:08:57.432736992 +0000 UTC m=+36.667984794" lastFinishedPulling="2026-04-22 15:09:05.797764375 +0000 UTC m=+45.033012163" observedRunningTime="2026-04-22 15:09:06.602202342 +0000 UTC m=+45.837450163" watchObservedRunningTime="2026-04-22 15:09:06.602257359 +0000 UTC m=+45.837505212" Apr 22 15:09:06.602698 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:06.602669 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podStartSLOduration=34.632268474 podStartE2EDuration="40.602660729s" podCreationTimestamp="2026-04-22 15:08:26 +0000 UTC" firstStartedPulling="2026-04-22 15:08:57.432700894 +0000 UTC m=+36.667948687" lastFinishedPulling="2026-04-22 15:09:03.403093139 +0000 UTC m=+42.638340942" observedRunningTime="2026-04-22 15:09:03.606784565 +0000 UTC m=+42.842032377" watchObservedRunningTime="2026-04-22 15:09:06.602660729 +0000 UTC m=+45.837908575" Apr 22 15:09:10.005094 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:10.005056 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod 
\"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:09:10.008711 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:10.008680 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/95af4bf4-9e09-49ec-bfb1-f16c11110db8-original-pull-secret\") pod \"global-pull-secret-syncer-6fwnt\" (UID: \"95af4bf4-9e09-49ec-bfb1-f16c11110db8\") " pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:09:10.136116 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:10.136077 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6fwnt" Apr 22 15:09:10.258315 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:10.258239 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-6fwnt"] Apr 22 15:09:10.261404 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:09:10.261370 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95af4bf4_9e09_49ec_bfb1_f16c11110db8.slice/crio-98434f87cc481dad1402b9c083560ea14d91e33459774dd15d8dc8e1880621f8 WatchSource:0}: Error finding container 98434f87cc481dad1402b9c083560ea14d91e33459774dd15d8dc8e1880621f8: Status 404 returned error can't find the container with id 98434f87cc481dad1402b9c083560ea14d91e33459774dd15d8dc8e1880621f8 Apr 22 15:09:10.576900 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:10.576784 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-6fwnt" event={"ID":"95af4bf4-9e09-49ec-bfb1-f16c11110db8","Type":"ContainerStarted","Data":"98434f87cc481dad1402b9c083560ea14d91e33459774dd15d8dc8e1880621f8"} Apr 22 15:09:11.617988 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:11.617953 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:09:11.618551 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:11.618089 2575 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 22 15:09:11.618551 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:11.618106 2575 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66f5f8d5cd-rgqhw: secret "image-registry-tls" not found Apr 22 15:09:11.618551 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:11.618173 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls podName:2adf4441-467a-46c0-a616-97afe2eb9fe8 nodeName:}" failed. No retries permitted until 2026-04-22 15:09:27.618154145 +0000 UTC m=+66.853401935 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls") pod "image-registry-66f5f8d5cd-rgqhw" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8") : secret "image-registry-tls" not found Apr 22 15:09:11.718953 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:11.718912 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:09:11.718953 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:11.718958 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:09:11.719203 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:11.719053 2575 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 22 15:09:11.719203 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:11.719067 2575 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 22 15:09:11.719203 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:11.719108 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls podName:63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b nodeName:}" failed. No retries permitted until 2026-04-22 15:09:27.71909407 +0000 UTC m=+66.954341858 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls") pod "dns-default-rb7d6" (UID: "63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b") : secret "dns-default-metrics-tls" not found Apr 22 15:09:11.719203 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:11.719130 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert podName:60651bed-aafc-4a23-b90f-3110ee68359c nodeName:}" failed. No retries permitted until 2026-04-22 15:09:27.71911408 +0000 UTC m=+66.954361868 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert") pod "ingress-canary-4lslj" (UID: "60651bed-aafc-4a23-b90f-3110ee68359c") : secret "canary-serving-cert" not found Apr 22 15:09:16.589582 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:16.589542 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-6fwnt" event={"ID":"95af4bf4-9e09-49ec-bfb1-f16c11110db8","Type":"ContainerStarted","Data":"d8472d5c0593161607de85b46f137642fc7b9004e4137b4b39f83a8c5d3fcf95"} Apr 22 15:09:16.606136 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:16.606086 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-6fwnt" podStartSLOduration=33.302755930000004 podStartE2EDuration="38.606071144s" podCreationTimestamp="2026-04-22 15:08:38 +0000 UTC" firstStartedPulling="2026-04-22 15:09:10.263111283 +0000 UTC m=+49.498359072" lastFinishedPulling="2026-04-22 15:09:15.566426498 +0000 UTC m=+54.801674286" observedRunningTime="2026-04-22 15:09:16.605091157 +0000 UTC m=+55.840338980" watchObservedRunningTime="2026-04-22 15:09:16.606071144 +0000 UTC m=+55.841318953" Apr 22 15:09:21.509829 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:21.509796 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lt5hd" Apr 22 15:09:27.049590 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.049554 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:09:27.052450 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.052430 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 22 15:09:27.059993 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.059969 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 22 15:09:27.060076 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.060043 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:10:31.060025545 +0000 UTC m=+130.295273333 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : secret "metrics-daemon-secret" not found Apr 22 15:09:27.151013 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.150961 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:09:27.153776 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.153749 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 22 15:09:27.164518 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.164490 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 22 15:09:27.175404 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.175372 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhkz8\" (UniqueName: \"kubernetes.io/projected/0e7fe577-78a6-4227-b074-218a66e869bc-kube-api-access-zhkz8\") pod \"network-check-target-j6s9c\" (UID: \"0e7fe577-78a6-4227-b074-218a66e869bc\") " pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:09:27.252467 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.252434 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-4nbn2\"" Apr 22 15:09:27.259952 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.259930 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:09:27.378084 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.378049 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-j6s9c"] Apr 22 15:09:27.381528 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:09:27.381485 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e7fe577_78a6_4227_b074_218a66e869bc.slice/crio-462a76cd2aa6c55fbb237adfc3b856d5c60dedc9b05d56f0e0999e9570317ed4 WatchSource:0}: Error finding container 462a76cd2aa6c55fbb237adfc3b856d5c60dedc9b05d56f0e0999e9570317ed4: Status 404 returned error can't find the container with id 462a76cd2aa6c55fbb237adfc3b856d5c60dedc9b05d56f0e0999e9570317ed4 Apr 22 15:09:27.618134 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.618038 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-j6s9c" event={"ID":"0e7fe577-78a6-4227-b074-218a66e869bc","Type":"ContainerStarted","Data":"462a76cd2aa6c55fbb237adfc3b856d5c60dedc9b05d56f0e0999e9570317ed4"} Apr 22 15:09:27.654754 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.654717 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:09:27.654928 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.654889 2575 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 22 15:09:27.654928 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.654908 2575 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66f5f8d5cd-rgqhw: secret "image-registry-tls" not found Apr 22 15:09:27.654999 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.654976 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls podName:2adf4441-467a-46c0-a616-97afe2eb9fe8 nodeName:}" failed. No retries permitted until 2026-04-22 15:09:59.654959346 +0000 UTC m=+98.890207134 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls") pod "image-registry-66f5f8d5cd-rgqhw" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8") : secret "image-registry-tls" not found Apr 22 15:09:27.755329 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.755288 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:09:27.755329 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:27.755346 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:09:27.755587 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.755470 2575 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 22 15:09:27.755587 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.755484 2575 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 22 15:09:27.755587 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.755568 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert podName:60651bed-aafc-4a23-b90f-3110ee68359c nodeName:}" failed. No retries permitted until 2026-04-22 15:09:59.75554705 +0000 UTC m=+98.990794842 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert") pod "ingress-canary-4lslj" (UID: "60651bed-aafc-4a23-b90f-3110ee68359c") : secret "canary-serving-cert" not found Apr 22 15:09:27.755587 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:27.755588 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls podName:63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b nodeName:}" failed. No retries permitted until 2026-04-22 15:09:59.755579577 +0000 UTC m=+98.990827371 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls") pod "dns-default-rb7d6" (UID: "63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b") : secret "dns-default-metrics-tls" not found Apr 22 15:09:30.626783 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:30.626748 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-j6s9c" event={"ID":"0e7fe577-78a6-4227-b074-218a66e869bc","Type":"ContainerStarted","Data":"bbeed5c4d5179e2437bc74d6330825ed1af2c32d0b70f25124685e84a1ffe770"} Apr 22 15:09:30.627215 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:30.626984 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:09:30.644616 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:30.644566 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-j6s9c" podStartSLOduration=67.057996869 podStartE2EDuration="1m9.644549878s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:09:27.383487447 +0000 UTC m=+66.618735235" lastFinishedPulling="2026-04-22 15:09:29.970040455 +0000 UTC m=+69.205288244" observedRunningTime="2026-04-22 15:09:30.643553221 +0000 UTC m=+69.878801061" watchObservedRunningTime="2026-04-22 15:09:30.644549878 +0000 UTC m=+69.879797678" Apr 22 15:09:59.703379 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:59.703337 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:09:59.703817 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:59.703482 2575 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 22 15:09:59.703817 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:59.703500 2575 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66f5f8d5cd-rgqhw: secret "image-registry-tls" not found Apr 22 15:09:59.703817 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:59.703556 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls podName:2adf4441-467a-46c0-a616-97afe2eb9fe8 nodeName:}" failed. No retries permitted until 2026-04-22 15:11:03.703538718 +0000 UTC m=+162.938786506 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls") pod "image-registry-66f5f8d5cd-rgqhw" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8") : secret "image-registry-tls" not found Apr 22 15:09:59.804414 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:59.804374 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:09:59.804519 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:09:59.804428 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:09:59.804556 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:59.804529 2575 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 22 15:09:59.804556 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:59.804529 2575 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 22 15:09:59.804614 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:59.804593 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert podName:60651bed-aafc-4a23-b90f-3110ee68359c nodeName:}" failed. No retries permitted until 2026-04-22 15:11:03.804575381 +0000 UTC m=+163.039823169 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert") pod "ingress-canary-4lslj" (UID: "60651bed-aafc-4a23-b90f-3110ee68359c") : secret "canary-serving-cert" not found Apr 22 15:09:59.804614 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:09:59.804606 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls podName:63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b nodeName:}" failed. No retries permitted until 2026-04-22 15:11:03.804600238 +0000 UTC m=+163.039848026 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls") pod "dns-default-rb7d6" (UID: "63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b") : secret "dns-default-metrics-tls" not found Apr 22 15:10:01.631852 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:10:01.631815 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-j6s9c" Apr 22 15:10:31.128554 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:10:31.128494 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:10:31.129083 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:10:31.128647 2575 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 22 15:10:31.129083 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:10:31.128730 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs podName:be8c5f47-6214-42a7-8e36-1c852cc48be6 nodeName:}" failed. No retries permitted until 2026-04-22 15:12:33.128713517 +0000 UTC m=+252.363961306 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs") pod "network-metrics-daemon-b6hrq" (UID: "be8c5f47-6214-42a7-8e36-1c852cc48be6") : secret "metrics-daemon-secret" not found Apr 22 15:10:46.781570 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:10:46.781537 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-hqq4l_54264bd4-ce9e-4010-b213-56e5f4bfe070/dns-node-resolver/0.log" Apr 22 15:10:47.580063 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:10:47.580031 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-rw9wr_24727a23-7950-43c6-9a15-92416687fab7/node-ca/0.log" Apr 22 15:10:58.811373 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:10:58.811324 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" Apr 22 15:10:58.818506 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:10:58.818470 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-rb7d6" podUID="63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b" Apr 22 15:10:58.832340 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:10:58.832316 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rb7d6" Apr 22 15:10:58.832476 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:10:58.832353 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:10:58.838983 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:10:58.838946 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-4lslj" podUID="60651bed-aafc-4a23-b90f-3110ee68359c" Apr 22 15:10:59.343659 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:10:59.343611 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-b6hrq" podUID="be8c5f47-6214-42a7-8e36-1c852cc48be6" Apr 22 15:10:59.834728 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:10:59.834699 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:11:03.558170 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.558136 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" containerName="acm-agent" probeResult="failure" output="Get \"http://10.132.0.8:8000/readyz\": dial tcp 10.132.0.8:8000: connect: connection refused" Apr 22 15:11:03.782430 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.782308 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:11:03.785025 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.784993 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"image-registry-66f5f8d5cd-rgqhw\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:11:03.845075 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.845038 2575 generic.go:358] "Generic (PLEG): container finished" podID="b50382c5-ef34-4d73-9526-989655d2e11f" containerID="a34538fd36d3bfda0920a91ef8f3b0f36d554e7a47196f3ffb70fbe88bdc2f7d" exitCode=255 Apr 22 15:11:03.845245 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.845113 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerDied","Data":"a34538fd36d3bfda0920a91ef8f3b0f36d554e7a47196f3ffb70fbe88bdc2f7d"} Apr 22 15:11:03.845483 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.845465 2575 scope.go:117] "RemoveContainer" containerID="a34538fd36d3bfda0920a91ef8f3b0f36d554e7a47196f3ffb70fbe88bdc2f7d" Apr 22 15:11:03.846464 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.846446 2575 generic.go:358] "Generic (PLEG): container finished" podID="047276c4-c2f7-4f16-a7be-64fba5485b6c" containerID="76ad31e6f176e4c33c585ab6bd257ebdd0ec300424c8a574b9ca24b69f97657e" exitCode=1 Apr 22 15:11:03.846538 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.846508 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerDied","Data":"76ad31e6f176e4c33c585ab6bd257ebdd0ec300424c8a574b9ca24b69f97657e"} Apr 22 15:11:03.846844 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.846827 2575 scope.go:117] "RemoveContainer" containerID="76ad31e6f176e4c33c585ab6bd257ebdd0ec300424c8a574b9ca24b69f97657e" Apr 22 15:11:03.882936 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.882885 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:11:03.883133 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.882962 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:11:03.885570 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.885550 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b-metrics-tls\") pod \"dns-default-rb7d6\" (UID: \"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b\") " pod="openshift-dns/dns-default-rb7d6" Apr 22 15:11:03.885801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.885779 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60651bed-aafc-4a23-b90f-3110ee68359c-cert\") pod \"ingress-canary-4lslj\" (UID: \"60651bed-aafc-4a23-b90f-3110ee68359c\") " pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:11:03.936344 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.936305 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-fkblf\"" Apr 22 15:11:03.936534 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.936305 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-dblpk\"" Apr 22 15:11:03.944055 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.943886 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:11:03.944482 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:03.944451 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rb7d6" Apr 22 15:11:04.038703 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.038657 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-cbqpc\"" Apr 22 15:11:04.047060 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.047008 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4lslj" Apr 22 15:11:04.094523 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.094477 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"] Apr 22 15:11:04.099334 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:11:04.099116 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2adf4441_467a_46c0_a616_97afe2eb9fe8.slice/crio-7ed9a2ef78421de6df9937d18b8fa6d020dca589b572458cd2823b6601fc1df2 WatchSource:0}: Error finding container 7ed9a2ef78421de6df9937d18b8fa6d020dca589b572458cd2823b6601fc1df2: Status 404 returned error can't find the container with id 7ed9a2ef78421de6df9937d18b8fa6d020dca589b572458cd2823b6601fc1df2 Apr 22 15:11:04.107706 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.107678 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rb7d6"] Apr 22 15:11:04.116060 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:11:04.115733 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63cdf3b0_7bf4_40fc_a333_2ca716b7ef3b.slice/crio-ff55d7e24370868caea8b40599c2895b98c3a932eb88daf2d37ead735df6d65c WatchSource:0}: Error finding container ff55d7e24370868caea8b40599c2895b98c3a932eb88daf2d37ead735df6d65c: Status 404 returned error can't find the container with id ff55d7e24370868caea8b40599c2895b98c3a932eb88daf2d37ead735df6d65c Apr 22 15:11:04.182838 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.182802 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4lslj"] Apr 22 15:11:04.185771 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:11:04.185745 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60651bed_aafc_4a23_b90f_3110ee68359c.slice/crio-b5f8d632fba32cd70fa9d965edbb4d9b1cbdd454888157cd1f6e9f105304c354 WatchSource:0}: Error finding container b5f8d632fba32cd70fa9d965edbb4d9b1cbdd454888157cd1f6e9f105304c354: Status 404 returned error can't find the container with id b5f8d632fba32cd70fa9d965edbb4d9b1cbdd454888157cd1f6e9f105304c354 Apr 22 15:11:04.852773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.852240 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerStarted","Data":"fb39437d5f966bdc62a3bed793d9bd217a014faed662cebbe6f0d83824de8caf"} Apr 22 15:11:04.854524 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.854495 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerStarted","Data":"097113d4bb9b6fde710109ebc5e7ad05fa93fb55e6485715a21f12d2bd75b44a"} Apr 22 15:11:04.855289 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.855120 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:11:04.855836 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.855818 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:11:04.857150 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.857123 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rb7d6" event={"ID":"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b","Type":"ContainerStarted","Data":"ff55d7e24370868caea8b40599c2895b98c3a932eb88daf2d37ead735df6d65c"} Apr 22 15:11:04.858783 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.858753 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerStarted","Data":"0f5323dd8c45f4d123152693f04233675d3da65ff24e6b1d72267c74587f95bd"} Apr 22 15:11:04.858783 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.858790 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerStarted","Data":"7ed9a2ef78421de6df9937d18b8fa6d020dca589b572458cd2823b6601fc1df2"} Apr 22 15:11:04.858957 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.858915 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:11:04.859946 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.859927 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4lslj" event={"ID":"60651bed-aafc-4a23-b90f-3110ee68359c","Type":"ContainerStarted","Data":"b5f8d632fba32cd70fa9d965edbb4d9b1cbdd454888157cd1f6e9f105304c354"} Apr 22 15:11:04.913557 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:04.913507 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podStartSLOduration=142.913490119 podStartE2EDuration="2m22.913490119s" podCreationTimestamp="2026-04-22 15:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 15:11:04.912745361 +0000 UTC m=+164.147993173" watchObservedRunningTime="2026-04-22 15:11:04.913490119 +0000 UTC m=+164.148737929" Apr 22 15:11:06.867385 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:06.867335 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4lslj" event={"ID":"60651bed-aafc-4a23-b90f-3110ee68359c","Type":"ContainerStarted","Data":"23b42f8d0c8d3e1179f106f9d40ae88c90e1273a934f147fb95a8a20e2e47631"} Apr 22 15:11:06.868849 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:06.868824 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rb7d6" event={"ID":"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b","Type":"ContainerStarted","Data":"06a966096be9cbd192f0f5dc82f5f454f26ed1ffabb219ff3b50293111c5d659"} Apr 22 15:11:06.868981 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:06.868854 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rb7d6" event={"ID":"63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b","Type":"ContainerStarted","Data":"b597badd0823325fbf792c40121ef08bd484fc204f941a4b11fa8ed0bea2332d"} Apr 22 15:11:06.933350 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:06.933273 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4lslj" podStartSLOduration=130.026531638 
podStartE2EDuration="2m11.93325518s" podCreationTimestamp="2026-04-22 15:08:55 +0000 UTC" firstStartedPulling="2026-04-22 15:11:04.187621755 +0000 UTC m=+163.422869543" lastFinishedPulling="2026-04-22 15:11:06.094345284 +0000 UTC m=+165.329593085" observedRunningTime="2026-04-22 15:11:06.885443657 +0000 UTC m=+166.120691466" watchObservedRunningTime="2026-04-22 15:11:06.93325518 +0000 UTC m=+166.168503045" Apr 22 15:11:06.934123 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:06.934077 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rb7d6" podStartSLOduration=129.961306825 podStartE2EDuration="2m11.934066351s" podCreationTimestamp="2026-04-22 15:08:55 +0000 UTC" firstStartedPulling="2026-04-22 15:11:04.117842955 +0000 UTC m=+163.353090744" lastFinishedPulling="2026-04-22 15:11:06.09060248 +0000 UTC m=+165.325850270" observedRunningTime="2026-04-22 15:11:06.932623753 +0000 UTC m=+166.167871589" watchObservedRunningTime="2026-04-22 15:11:06.934066351 +0000 UTC m=+166.169314160" Apr 22 15:11:07.871292 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:07.871255 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-rb7d6" Apr 22 15:11:12.324440 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:12.324400 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:11:17.876929 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:17.876893 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rb7d6" Apr 22 15:11:23.949130 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:23.949083 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:11:23.949527 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:23.949137 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:11:25.867656 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:25.867613 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:11:25.868128 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:25.867692 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:11:33.949081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:33.949043 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service 
unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:11:33.949563 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:33.949114 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:11:35.867743 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:35.867694 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:11:35.868218 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:35.867764 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:11:36.078768 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:36.078731 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" podUID="700dd30f-621e-4e1b-970e-a0fe55861cb9" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 22 15:11:43.949224 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:43.949188 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:11:43.949770 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:43.949249 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:11:43.949770 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:43.949289 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:11:43.949943 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:43.949788 2575 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="registry" containerStatusID={"Type":"cri-o","ID":"0f5323dd8c45f4d123152693f04233675d3da65ff24e6b1d72267c74587f95bd"} pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" containerMessage="Container registry failed liveness probe, will be restarted" Apr 22 15:11:43.953488 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:43.953462 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:11:43.953626 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:43.953515 2575 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:11:46.079433 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:46.079391 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" podUID="700dd30f-621e-4e1b-970e-a0fe55861cb9" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 22 15:11:53.953714 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:53.953679 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:11:53.954471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:53.953753 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:11:56.078929 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:56.078884 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" podUID="700dd30f-621e-4e1b-970e-a0fe55861cb9" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 22 15:11:56.079305 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:56.078968 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" Apr 22 15:11:56.079539 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:56.079520 2575 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="service-proxy" containerStatusID={"Type":"cri-o","ID":"c3ce6816ea71466fb6d93735b8036da1d8799afa8a350e59637628f66f8e8d75"} pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" containerMessage="Container service-proxy failed liveness probe, will be restarted" Apr 22 15:11:56.079576 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:56.079560 2575 kuberuntime_container.go:864] "Killing container with a grace period" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" podUID="700dd30f-621e-4e1b-970e-a0fe55861cb9" containerName="service-proxy" containerID="cri-o://c3ce6816ea71466fb6d93735b8036da1d8799afa8a350e59637628f66f8e8d75" gracePeriod=30 Apr 22 15:11:56.998378 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:56.998342 2575 generic.go:358] "Generic (PLEG): container finished" podID="700dd30f-621e-4e1b-970e-a0fe55861cb9" containerID="c3ce6816ea71466fb6d93735b8036da1d8799afa8a350e59637628f66f8e8d75" exitCode=2 Apr 22 15:11:56.998579 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:11:56.998417 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" event={"ID":"700dd30f-621e-4e1b-970e-a0fe55861cb9","Type":"ContainerDied","Data":"c3ce6816ea71466fb6d93735b8036da1d8799afa8a350e59637628f66f8e8d75"} Apr 22 15:11:56.998579 ip-10-0-134-217 kubenswrapper[2575]: I0422 
15:11:56.998462 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-86d5fdd478-9njxg" event={"ID":"700dd30f-621e-4e1b-970e-a0fe55861cb9","Type":"ContainerStarted","Data":"028edce18567c8058dda905d4aefbf31bc23aecdd0fa6dcf1776c0d17f187471"} Apr 22 15:12:03.953461 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:03.953422 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:12:03.953853 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:03.953481 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:12:08.969765 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:08.969720 2575 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" containerID="cri-o://0f5323dd8c45f4d123152693f04233675d3da65ff24e6b1d72267c74587f95bd" gracePeriod=30 Apr 22 15:12:10.031483 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:10.031447 2575 generic.go:358] "Generic (PLEG): container finished" podID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerID="0f5323dd8c45f4d123152693f04233675d3da65ff24e6b1d72267c74587f95bd" exitCode=0 Apr 22 15:12:10.031887 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:10.031519 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerDied","Data":"0f5323dd8c45f4d123152693f04233675d3da65ff24e6b1d72267c74587f95bd"} Apr 22 15:12:10.031887 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:10.031553 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerStarted","Data":"425100b66ebaea623ca7a80b7d4287c7380ec4cc5a6938d2e5fb717b4d1de493"} Apr 22 15:12:10.031887 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:10.031741 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:12:13.752528 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:13.752500 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-rb7d6_63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b/dns/0.log" Apr 22 15:12:13.951515 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:13.951487 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-rb7d6_63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b/kube-rbac-proxy/0.log" Apr 22 15:12:14.351247 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:14.351224 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-hqq4l_54264bd4-ce9e-4010-b213-56e5f4bfe070/dns-node-resolver/0.log" Apr 22 15:12:14.756222 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:14.756195 2575 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-image-registry_image-registry-66f5f8d5cd-rgqhw_2adf4441-467a-46c0-a616-97afe2eb9fe8/registry/0.log" Apr 22 15:12:14.954052 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:14.954024 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-66f5f8d5cd-rgqhw_2adf4441-467a-46c0-a616-97afe2eb9fe8/registry/1.log" Apr 22 15:12:15.552354 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:15.552328 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-rw9wr_24727a23-7950-43c6-9a15-92416687fab7/node-ca/0.log" Apr 22 15:12:16.351734 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:16.351703 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-4lslj_60651bed-aafc-4a23-b90f-3110ee68359c/serve-healthcheck-canary/0.log" Apr 22 15:12:23.949767 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:23.949711 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:12:23.950176 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:23.949774 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:12:31.038539 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:31.038451 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:12:31.038539 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:31.038505 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:12:33.131570 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:33.131525 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:12:33.134020 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:33.133997 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be8c5f47-6214-42a7-8e36-1c852cc48be6-metrics-certs\") pod \"network-metrics-daemon-b6hrq\" (UID: \"be8c5f47-6214-42a7-8e36-1c852cc48be6\") " pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:12:33.328160 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:33.328130 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-62z2n\"" Apr 22 15:12:33.336360 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:33.336328 2575 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6hrq" Apr 22 15:12:33.458359 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:33.458276 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-b6hrq"] Apr 22 15:12:33.460997 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:12:33.460965 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe8c5f47_6214_42a7_8e36_1c852cc48be6.slice/crio-c31fc345eef3d1be248d2a45146e8736a1a9dfb53fd57a816a63078f3e149198 WatchSource:0}: Error finding container c31fc345eef3d1be248d2a45146e8736a1a9dfb53fd57a816a63078f3e149198: Status 404 returned error can't find the container with id c31fc345eef3d1be248d2a45146e8736a1a9dfb53fd57a816a63078f3e149198 Apr 22 15:12:33.948845 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:33.948808 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:12:33.949029 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:33.948887 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:12:34.093721 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:34.093685 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b6hrq" event={"ID":"be8c5f47-6214-42a7-8e36-1c852cc48be6","Type":"ContainerStarted","Data":"c31fc345eef3d1be248d2a45146e8736a1a9dfb53fd57a816a63078f3e149198"} Apr 22 15:12:35.098412 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:35.098374 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b6hrq" event={"ID":"be8c5f47-6214-42a7-8e36-1c852cc48be6","Type":"ContainerStarted","Data":"a2814a4470b7c2556bd2528bf4066c9bda924909ce1623f80df28733ca9b0d6a"} Apr 22 15:12:35.098412 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:35.098408 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b6hrq" event={"ID":"be8c5f47-6214-42a7-8e36-1c852cc48be6","Type":"ContainerStarted","Data":"053218cc823a2c8475698b7b47b780ea1433e3c6cc74bae44e68b421c86efb5b"} Apr 22 15:12:35.123566 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:35.123496 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-b6hrq" podStartSLOduration=252.940998192 podStartE2EDuration="4m14.123481525s" podCreationTimestamp="2026-04-22 15:08:21 +0000 UTC" firstStartedPulling="2026-04-22 15:12:33.462691178 +0000 UTC m=+252.697938966" lastFinishedPulling="2026-04-22 15:12:34.645174506 +0000 UTC m=+253.880422299" observedRunningTime="2026-04-22 15:12:35.12295953 +0000 UTC m=+254.358207352" watchObservedRunningTime="2026-04-22 15:12:35.123481525 +0000 UTC m=+254.358729383" Apr 22 15:12:41.038186 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:41.038149 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure 
output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:12:41.038631 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:41.038209 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:12:43.950708 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:43.950675 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:12:43.951186 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:43.950732 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:12:43.951186 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:43.950783 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:12:43.951369 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:43.951346 2575 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="registry" containerStatusID={"Type":"cri-o","ID":"425100b66ebaea623ca7a80b7d4287c7380ec4cc5a6938d2e5fb717b4d1de493"} pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" containerMessage="Container registry failed liveness probe, will be restarted" Apr 22 15:12:43.955303 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:43.955274 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:12:43.955457 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:43.955323 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:12:53.955919 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:53.955858 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:12:53.956371 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:12:53.955945 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:13:03.955777 ip-10-0-134-217 kubenswrapper[2575]: I0422 
15:13:03.955746 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:13:03.956148 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:03.955807 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:13:04.174384 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.174353 2575 generic.go:358] "Generic (PLEG): container finished" podID="b50382c5-ef34-4d73-9526-989655d2e11f" containerID="fb39437d5f966bdc62a3bed793d9bd217a014faed662cebbe6f0d83824de8caf" exitCode=255 Apr 22 15:13:04.174614 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.174417 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerDied","Data":"fb39437d5f966bdc62a3bed793d9bd217a014faed662cebbe6f0d83824de8caf"} Apr 22 15:13:04.174614 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.174456 2575 scope.go:117] "RemoveContainer" containerID="a34538fd36d3bfda0920a91ef8f3b0f36d554e7a47196f3ffb70fbe88bdc2f7d" Apr 22 15:13:04.174815 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.174793 2575 scope.go:117] "RemoveContainer" containerID="fb39437d5f966bdc62a3bed793d9bd217a014faed662cebbe6f0d83824de8caf" Apr 22 15:13:04.175077 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:13:04.175056 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f" Apr 22 15:13:04.176059 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.176037 2575 generic.go:358] "Generic (PLEG): container finished" podID="047276c4-c2f7-4f16-a7be-64fba5485b6c" containerID="097113d4bb9b6fde710109ebc5e7ad05fa93fb55e6485715a21f12d2bd75b44a" exitCode=1 Apr 22 15:13:04.176132 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.176082 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerDied","Data":"097113d4bb9b6fde710109ebc5e7ad05fa93fb55e6485715a21f12d2bd75b44a"} Apr 22 15:13:04.176359 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.176345 2575 scope.go:117] "RemoveContainer" containerID="097113d4bb9b6fde710109ebc5e7ad05fa93fb55e6485715a21f12d2bd75b44a" Apr 22 15:13:04.176502 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:13:04.176488 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" 
pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:13:04.184582 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.184565 2575 scope.go:117] "RemoveContainer" containerID="76ad31e6f176e4c33c585ab6bd257ebdd0ec300424c8a574b9ca24b69f97657e" Apr 22 15:13:04.854953 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:04.854911 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:13:05.181031 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:05.180958 2575 scope.go:117] "RemoveContainer" containerID="097113d4bb9b6fde710109ebc5e7ad05fa93fb55e6485715a21f12d2bd75b44a" Apr 22 15:13:05.181367 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:13:05.181132 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:13:06.088685 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:06.088651 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:13:06.089021 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:06.089008 2575 scope.go:117] "RemoveContainer" containerID="fb39437d5f966bdc62a3bed793d9bd217a014faed662cebbe6f0d83824de8caf" Apr 22 15:13:06.089198 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:13:06.089182 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f" Apr 22 15:13:06.102756 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:06.102733 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:13:06.183712 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:06.183683 2575 scope.go:117] "RemoveContainer" containerID="097113d4bb9b6fde710109ebc5e7ad05fa93fb55e6485715a21f12d2bd75b44a" Apr 22 15:13:06.184092 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:13:06.183877 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:13:08.971629 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:08.971586 2575 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" 
containerID="cri-o://425100b66ebaea623ca7a80b7d4287c7380ec4cc5a6938d2e5fb717b4d1de493" gracePeriod=30 Apr 22 15:13:09.192617 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:09.192584 2575 generic.go:358] "Generic (PLEG): container finished" podID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerID="425100b66ebaea623ca7a80b7d4287c7380ec4cc5a6938d2e5fb717b4d1de493" exitCode=0 Apr 22 15:13:09.192780 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:09.192630 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerDied","Data":"425100b66ebaea623ca7a80b7d4287c7380ec4cc5a6938d2e5fb717b4d1de493"} Apr 22 15:13:09.192780 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:09.192659 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerStarted","Data":"7920d1ce3e2e6e3d9a968b669354131377fd7c1c2b09138fdf2da2bde7c45a85"} Apr 22 15:13:09.192780 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:09.192674 2575 scope.go:117] "RemoveContainer" containerID="0f5323dd8c45f4d123152693f04233675d3da65ff24e6b1d72267c74587f95bd" Apr 22 15:13:09.192939 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:09.192822 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:13:18.324494 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:18.324460 2575 scope.go:117] "RemoveContainer" containerID="fb39437d5f966bdc62a3bed793d9bd217a014faed662cebbe6f0d83824de8caf" Apr 22 15:13:19.221380 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:19.221346 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerStarted","Data":"b30b9a487f2c7d459d78601ccd02ab05d0904fea425d022b8996bf44421cdee8"} Apr 22 15:13:21.224441 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:21.224409 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:13:21.224990 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:21.224409 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:13:21.228924 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:21.228902 2575 kubelet.go:1628] "Image garbage collection succeeded" Apr 22 15:13:21.325070 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:21.324930 2575 scope.go:117] "RemoveContainer" containerID="097113d4bb9b6fde710109ebc5e7ad05fa93fb55e6485715a21f12d2bd75b44a" Apr 22 15:13:22.230034 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:22.229997 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerStarted","Data":"7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c"} Apr 22 15:13:22.230410 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:22.230295 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 
15:13:22.231688 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:22.231665 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:13:23.949107 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:23.949074 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:13:23.949546 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:23.949133 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:13:30.201568 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:30.201524 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:13:30.202023 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:30.201580 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:13:33.948241 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:33.948208 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:13:33.948608 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:33.948258 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:13:40.201404 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:40.201367 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:13:40.201798 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:40.201429 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:13:43.948215 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:43.948181 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure 
output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:13:43.948587 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:43.948233 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:13:43.948587 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:43.948271 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:13:43.948724 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:43.948706 2575 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="registry" containerStatusID={"Type":"cri-o","ID":"7920d1ce3e2e6e3d9a968b669354131377fd7c1c2b09138fdf2da2bde7c45a85"} pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" containerMessage="Container registry failed liveness probe, will be restarted" Apr 22 15:13:43.952024 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:43.952001 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:13:43.952144 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:43.952038 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:13:53.952690 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:53.952655 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:13:53.953169 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:13:53.952710 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:14:03.952639 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:03.952604 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:14:03.953027 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:03.952664 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:14:08.967427 ip-10-0-134-217 kubenswrapper[2575]: I0422 
15:14:08.967379 2575 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" containerID="cri-o://7920d1ce3e2e6e3d9a968b669354131377fd7c1c2b09138fdf2da2bde7c45a85" gracePeriod=30 Apr 22 15:14:09.085767 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:09.085747 2575 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 22 15:14:09.351451 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:09.351360 2575 generic.go:358] "Generic (PLEG): container finished" podID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerID="7920d1ce3e2e6e3d9a968b669354131377fd7c1c2b09138fdf2da2bde7c45a85" exitCode=0 Apr 22 15:14:09.351451 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:09.351415 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerDied","Data":"7920d1ce3e2e6e3d9a968b669354131377fd7c1c2b09138fdf2da2bde7c45a85"} Apr 22 15:14:09.351451 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:09.351448 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerStarted","Data":"b6fd7924a9a0df3e3bf2d80b466877381616af51f02f00c7831cc6bc0d117abd"} Apr 22 15:14:09.351712 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:09.351477 2575 scope.go:117] "RemoveContainer" containerID="425100b66ebaea623ca7a80b7d4287c7380ec4cc5a6938d2e5fb717b4d1de493" Apr 22 15:14:09.351712 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:09.351665 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:14:23.949299 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:23.949259 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:14:23.949685 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:23.949328 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:14:30.359663 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:30.359625 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:14:30.360133 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:30.359677 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:14:33.948211 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:33.948173 2575 patch_prober.go:28] 
interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:14:33.948570 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:33.948226 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:14:40.360078 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:40.360037 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:14:40.360525 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:40.360095 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:14:43.948247 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:43.948209 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:14:43.948665 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:43.948263 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:14:43.948665 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:43.948298 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:14:43.948785 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:43.948699 2575 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="registry" containerStatusID={"Type":"cri-o","ID":"b6fd7924a9a0df3e3bf2d80b466877381616af51f02f00c7831cc6bc0d117abd"} pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" containerMessage="Container registry failed liveness probe, will be restarted" Apr 22 15:14:43.952246 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:43.952211 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:14:43.952394 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:43.952287 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:14:53.952711 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:53.952675 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:14:53.953142 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:14:53.952733 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:15:03.952718 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:03.952679 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:15:03.953153 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:03.952734 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:15:08.967641 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:08.967593 2575 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" containerID="cri-o://b6fd7924a9a0df3e3bf2d80b466877381616af51f02f00c7831cc6bc0d117abd" gracePeriod=30 Apr 22 15:15:09.498907 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:09.498841 2575 generic.go:358] "Generic (PLEG): container finished" podID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerID="b6fd7924a9a0df3e3bf2d80b466877381616af51f02f00c7831cc6bc0d117abd" exitCode=0 Apr 22 15:15:09.499122 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:09.498986 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerDied","Data":"b6fd7924a9a0df3e3bf2d80b466877381616af51f02f00c7831cc6bc0d117abd"} Apr 22 15:15:09.499122 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:09.499018 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerStarted","Data":"ef8ff105da3be8347546c542c9baa9588533471eec56f7f422d0c92afa4c9a15"} Apr 22 15:15:09.499122 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:09.499040 2575 scope.go:117] "RemoveContainer" containerID="7920d1ce3e2e6e3d9a968b669354131377fd7c1c2b09138fdf2da2bde7c45a85" Apr 22 15:15:09.499566 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:09.499335 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:15:18.525160 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:18.525128 2575 generic.go:358] "Generic (PLEG): container finished" 
podID="b50382c5-ef34-4d73-9526-989655d2e11f" containerID="b30b9a487f2c7d459d78601ccd02ab05d0904fea425d022b8996bf44421cdee8" exitCode=255 Apr 22 15:15:18.525542 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:18.525193 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerDied","Data":"b30b9a487f2c7d459d78601ccd02ab05d0904fea425d022b8996bf44421cdee8"} Apr 22 15:15:18.525542 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:18.525226 2575 scope.go:117] "RemoveContainer" containerID="fb39437d5f966bdc62a3bed793d9bd217a014faed662cebbe6f0d83824de8caf" Apr 22 15:15:18.525542 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:18.525489 2575 scope.go:117] "RemoveContainer" containerID="b30b9a487f2c7d459d78601ccd02ab05d0904fea425d022b8996bf44421cdee8" Apr 22 15:15:18.525694 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:15:18.525675 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f" Apr 22 15:15:21.535300 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:21.535262 2575 generic.go:358] "Generic (PLEG): container finished" podID="047276c4-c2f7-4f16-a7be-64fba5485b6c" containerID="7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c" exitCode=1 Apr 22 15:15:21.535682 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:21.535341 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerDied","Data":"7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c"} Apr 22 15:15:21.535682 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:21.535392 2575 scope.go:117] "RemoveContainer" containerID="097113d4bb9b6fde710109ebc5e7ad05fa93fb55e6485715a21f12d2bd75b44a" Apr 22 15:15:21.535778 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:21.535762 2575 scope.go:117] "RemoveContainer" containerID="7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c" Apr 22 15:15:21.535993 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:15:21.535963 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:15:22.230323 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:22.230284 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:15:22.539310 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:22.539221 2575 scope.go:117] "RemoveContainer" containerID="7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c" Apr 22 15:15:22.539665 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:15:22.539388 2575 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:15:23.949584 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:23.949544 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 22 15:15:23.949975 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:23.949608 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 22 15:15:26.088360 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:26.088322 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:15:26.088779 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:26.088668 2575 scope.go:117] "RemoveContainer" containerID="b30b9a487f2c7d459d78601ccd02ab05d0904fea425d022b8996bf44421cdee8" Apr 22 15:15:26.088889 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:15:26.088843 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f" Apr 22 15:15:26.102537 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:26.102503 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:15:26.103069 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:26.103043 2575 scope.go:117] "RemoveContainer" containerID="7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c" Apr 22 15:15:26.103399 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:15:26.103304 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:15:30.507693 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:30.507605 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: 
please see /debug/health"}]}
Apr 22 15:15:30.507693 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:30.507678 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:15:33.948763 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:33.948729 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:15:33.949173 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:33.948784 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:15:37.324048 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:37.324006 2575 scope.go:117] "RemoveContainer" containerID="b30b9a487f2c7d459d78601ccd02ab05d0904fea425d022b8996bf44421cdee8"
Apr 22 15:15:37.324633 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:15:37.324177 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f"
Apr 22 15:15:40.324566 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:40.324529 2575 scope.go:117] "RemoveContainer" containerID="7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c"
Apr 22 15:15:40.324982 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:15:40.324707 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c"
Apr 22 15:15:40.507986 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:40.507950 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:15:40.508132 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:40.508001 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:15:43.948566 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:43.948526 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:15:43.948976 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:43.948591 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:15:43.948976 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:43.948627 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"
Apr 22 15:15:43.949121 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:43.949102 2575 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="registry" containerStatusID={"Type":"cri-o","ID":"ef8ff105da3be8347546c542c9baa9588533471eec56f7f422d0c92afa4c9a15"} pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" containerMessage="Container registry failed liveness probe, will be restarted"
Apr 22 15:15:43.952655 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:43.952622 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:15:43.952812 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:43.952668 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:15:48.324855 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:48.324815 2575 scope.go:117] "RemoveContainer" containerID="b30b9a487f2c7d459d78601ccd02ab05d0904fea425d022b8996bf44421cdee8"
Apr 22 15:15:48.603389 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:48.603302 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerStarted","Data":"b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951"}
Apr 22 15:15:53.953195 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:53.953155 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:15:53.953587 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:53.953207 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:15:55.327075 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:55.327048 2575 scope.go:117] "RemoveContainer" containerID="7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c"
Apr 22 15:15:55.625177 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:55.625091 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerStarted","Data":"7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad"}
Apr 22 15:15:55.625421 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:55.625400 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv"
Apr 22 15:15:55.626856 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:15:55.626832 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv"
Apr 22 15:16:03.953061 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:03.953024 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:16:03.953461 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:03.953080 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:16:08.967645 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:08.967607 2575 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" containerID="cri-o://ef8ff105da3be8347546c542c9baa9588533471eec56f7f422d0c92afa4c9a15" gracePeriod=30
Apr 22 15:16:09.661185 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:09.661147 2575 generic.go:358] "Generic (PLEG): container finished" podID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerID="ef8ff105da3be8347546c542c9baa9588533471eec56f7f422d0c92afa4c9a15" exitCode=0
Apr 22 15:16:09.661394 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:09.661233 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerDied","Data":"ef8ff105da3be8347546c542c9baa9588533471eec56f7f422d0c92afa4c9a15"}
Apr 22 15:16:09.661394 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:09.661281 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerStarted","Data":"4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1"}
Apr 22 15:16:09.661394 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:09.661301 2575 scope.go:117] "RemoveContainer" containerID="b6fd7924a9a0df3e3bf2d80b466877381616af51f02f00c7831cc6bc0d117abd"
Apr 22 15:16:09.661552 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:09.661503 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"
Apr 22 15:16:23.949425 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:23.949391 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:16:23.949791 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:23.949452 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:16:30.673649 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:30.673600 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:16:30.674129 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:30.673672 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:16:33.948928 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:33.948893 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:16:33.949283 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:33.948946 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:16:40.673970 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:40.673925 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:16:40.674505 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:40.673987 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:16:43.948696 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:43.948659 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:16:43.949101 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:43.948725 2575 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:16:43.949101 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:43.948762 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"
Apr 22 15:16:43.949294 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:43.949270 2575 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="registry" containerStatusID={"Type":"cri-o","ID":"4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1"} pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" containerMessage="Container registry failed liveness probe, will be restarted"
Apr 22 15:16:43.952577 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:43.952543 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:16:43.952715 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:43.952602 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:16:53.952552 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:53.952510 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:16:53.952957 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:16:53.952569 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:17:03.953716 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:03.953633 2575 patch_prober.go:28] interesting pod/image-registry-66f5f8d5cd-rgqhw container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Apr 22 15:17:03.953716 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:03.953705 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 22 15:17:08.968202 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:08.968157 2575 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" containerID="cri-o://4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1" gracePeriod=30
Apr 22 15:17:09.081059 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:09.081028 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry pod=image-registry-66f5f8d5cd-rgqhw_openshift-image-registry(2adf4441-467a-46c0-a616-97afe2eb9fe8)\"" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8"
Apr 22 15:17:09.824222 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:09.824188 2575 generic.go:358] "Generic (PLEG): container finished" podID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerID="4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1" exitCode=0
Apr 22 15:17:09.824222 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:09.824229 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerDied","Data":"4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1"}
Apr 22 15:17:09.824446 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:09.824261 2575 scope.go:117] "RemoveContainer" containerID="ef8ff105da3be8347546c542c9baa9588533471eec56f7f422d0c92afa4c9a15"
Apr 22 15:17:09.824648 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:09.824618 2575 scope.go:117] "RemoveContainer" containerID="4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1"
Apr 22 15:17:09.824858 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:09.824842 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry pod=image-registry-66f5f8d5cd-rgqhw_openshift-image-registry(2adf4441-467a-46c0-a616-97afe2eb9fe8)\"" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8"
Apr 22 15:17:23.324419 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:23.324382 2575 scope.go:117] "RemoveContainer" containerID="4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1"
Apr 22 15:17:23.324969 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:23.324570 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry pod=image-registry-66f5f8d5cd-rgqhw_openshift-image-registry(2adf4441-467a-46c0-a616-97afe2eb9fe8)\"" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8"
Apr 22 15:17:38.323892 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:38.323835 2575 scope.go:117] "RemoveContainer" containerID="4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1"
Apr 22 15:17:38.324262 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:38.324043 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry pod=image-registry-66f5f8d5cd-rgqhw_openshift-image-registry(2adf4441-467a-46c0-a616-97afe2eb9fe8)\"" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8"
Apr 22 15:17:48.932054 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:48.932015 2575 generic.go:358] "Generic (PLEG): container finished" podID="b50382c5-ef34-4d73-9526-989655d2e11f" containerID="b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951" exitCode=255
container finished" podID="b50382c5-ef34-4d73-9526-989655d2e11f" containerID="b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951" exitCode=255 Apr 22 15:17:48.932534 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:48.932086 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerDied","Data":"b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951"} Apr 22 15:17:48.932534 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:48.932131 2575 scope.go:117] "RemoveContainer" containerID="b30b9a487f2c7d459d78601ccd02ab05d0904fea425d022b8996bf44421cdee8" Apr 22 15:17:48.932534 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:48.932479 2575 scope.go:117] "RemoveContainer" containerID="b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951" Apr 22 15:17:48.932760 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:48.932739 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f" Apr 22 15:17:51.325986 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:51.325947 2575 scope.go:117] "RemoveContainer" containerID="4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1" Apr 22 15:17:51.326378 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:51.326188 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry pod=image-registry-66f5f8d5cd-rgqhw_openshift-image-registry(2adf4441-467a-46c0-a616-97afe2eb9fe8)\"" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" Apr 22 15:17:55.626266 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:55.626216 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" containerName="acm-agent" probeResult="failure" output="Get \"http://10.132.0.8:8000/readyz\": dial tcp 10.132.0.8:8000: connect: connection refused" Apr 22 15:17:55.951140 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:55.951102 2575 generic.go:358] "Generic (PLEG): container finished" podID="047276c4-c2f7-4f16-a7be-64fba5485b6c" containerID="7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad" exitCode=1 Apr 22 15:17:55.951321 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:55.951172 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerDied","Data":"7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad"} Apr 22 15:17:55.951321 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:55.951216 2575 scope.go:117] "RemoveContainer" containerID="7f6e6128e8e1b5bb61a105262dcb9d437b29d826bd60670a790ef24349455c2c" Apr 22 15:17:55.951533 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:55.951514 2575 scope.go:117] "RemoveContainer" 
containerID="7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad" Apr 22 15:17:55.951760 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:55.951743 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:17:56.088321 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:56.088284 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" Apr 22 15:17:56.088630 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:56.088616 2575 scope.go:117] "RemoveContainer" containerID="b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951" Apr 22 15:17:56.088850 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:56.088834 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f" Apr 22 15:17:56.103340 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:56.103305 2575 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:17:56.955628 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:17:56.955599 2575 scope.go:117] "RemoveContainer" containerID="7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad" Apr 22 15:17:56.956050 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:17:56.955764 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:18:05.219643 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.219609 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-j9pck"] Apr 22 15:18:05.222500 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.222481 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.227485 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.227454 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 22 15:18:05.227485 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.227477 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-c6286\"" Apr 22 15:18:05.228245 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.228220 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 22 15:18:05.228380 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.228336 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 22 15:18:05.228622 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.228599 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 22 15:18:05.251959 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.251924 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-j9pck"] Apr 22 15:18:05.310471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.310431 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrmq\" (UniqueName: \"kubernetes.io/projected/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-kube-api-access-pmrmq\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.310471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.310472 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.310675 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.310509 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-crio-socket\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.310675 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.310533 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.310675 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.310551 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-data-volume\") pod \"insights-runtime-extractor-j9pck\" (UID: 
\"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.411431 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.411394 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-crio-socket\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.411431 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.411435 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.411697 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.411458 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-data-volume\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.411697 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.411490 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pmrmq\" (UniqueName: \"kubernetes.io/projected/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-kube-api-access-pmrmq\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.411697 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.411511 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.411697 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.411521 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-crio-socket\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.411940 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.411851 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-data-volume\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.412082 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.412066 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.413960 ip-10-0-134-217 
kubenswrapper[2575]: I0422 15:18:05.413942 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.436574 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.436535 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmrmq\" (UniqueName: \"kubernetes.io/projected/1f82ac4e-087f-449b-b8e0-2f3bfab3600e-kube-api-access-pmrmq\") pod \"insights-runtime-extractor-j9pck\" (UID: \"1f82ac4e-087f-449b-b8e0-2f3bfab3600e\") " pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.533249 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.533154 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-j9pck" Apr 22 15:18:05.626491 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.626454 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:18:05.627072 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.627052 2575 scope.go:117] "RemoveContainer" containerID="7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad" Apr 22 15:18:05.627303 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:18:05.627284 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:18:05.670314 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.670274 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-j9pck"] Apr 22 15:18:05.674263 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:18:05.674232 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f82ac4e_087f_449b_b8e0_2f3bfab3600e.slice/crio-942465f9a2752704c1e76655ce11786359a66113c4c12cc86e18ece1a39c2c5b WatchSource:0}: Error finding container 942465f9a2752704c1e76655ce11786359a66113c4c12cc86e18ece1a39c2c5b: Status 404 returned error can't find the container with id 942465f9a2752704c1e76655ce11786359a66113c4c12cc86e18ece1a39c2c5b Apr 22 15:18:05.979086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.979054 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-j9pck" event={"ID":"1f82ac4e-087f-449b-b8e0-2f3bfab3600e","Type":"ContainerStarted","Data":"5a6646f17412038f3e7f5768bf911361249ea437c52add06698bf609b02e8570"} Apr 22 15:18:05.979086 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:05.979092 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-j9pck" event={"ID":"1f82ac4e-087f-449b-b8e0-2f3bfab3600e","Type":"ContainerStarted","Data":"942465f9a2752704c1e76655ce11786359a66113c4c12cc86e18ece1a39c2c5b"} Apr 22 15:18:06.324145 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:06.324063 2575 
scope.go:117] "RemoveContainer" containerID="4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1" Apr 22 15:18:06.324494 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:18:06.324296 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry pod=image-registry-66f5f8d5cd-rgqhw_openshift-image-registry(2adf4441-467a-46c0-a616-97afe2eb9fe8)\"" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" Apr 22 15:18:06.983287 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:06.983245 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-j9pck" event={"ID":"1f82ac4e-087f-449b-b8e0-2f3bfab3600e","Type":"ContainerStarted","Data":"6f6037f0193dae908e3f8cf27ec7b511c81972581f51fb38d2e58e7a86bb94e6"} Apr 22 15:18:07.987738 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:07.987701 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-j9pck" event={"ID":"1f82ac4e-087f-449b-b8e0-2f3bfab3600e","Type":"ContainerStarted","Data":"4b39f55853e31e75b7a2f99b7297cfb823009c7e8ffd8e0d27a0a69a515474dc"} Apr 22 15:18:08.009568 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:08.009519 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-j9pck" podStartSLOduration=1.131883565 podStartE2EDuration="3.009501596s" podCreationTimestamp="2026-04-22 15:18:05 +0000 UTC" firstStartedPulling="2026-04-22 15:18:05.745012806 +0000 UTC m=+584.980260594" lastFinishedPulling="2026-04-22 15:18:07.622630838 +0000 UTC m=+586.857878625" observedRunningTime="2026-04-22 15:18:08.008125914 +0000 UTC m=+587.243373731" watchObservedRunningTime="2026-04-22 15:18:08.009501596 +0000 UTC m=+587.244749406" Apr 22 15:18:08.323801 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:08.323711 2575 scope.go:117] "RemoveContainer" containerID="b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951" Apr 22 15:18:08.323969 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:18:08.323918 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f" Apr 22 15:18:13.439624 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.439581 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-295ld"] Apr 22 15:18:13.442946 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.442925 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.445522 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.445496 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 22 15:18:13.445725 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.445698 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-8dvjq\"" Apr 22 15:18:13.446385 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.446368 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 22 15:18:13.446850 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.446836 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 22 15:18:13.446946 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.446900 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 22 15:18:13.447218 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.447204 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 22 15:18:13.447656 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.447640 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 22 15:18:13.570112 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570076 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-textfile\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.570112 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570119 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-sys\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.570326 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570147 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e25ab46f-9394-4c4f-bf94-d5a9726b16da-metrics-client-ca\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.570326 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570186 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-root\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.570326 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570210 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: 
\"kubernetes.io/configmap/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-accelerators-collector-config\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.570326 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570258 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-tls\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.570326 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570301 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.570478 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570347 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fsl6\" (UniqueName: \"kubernetes.io/projected/e25ab46f-9394-4c4f-bf94-d5a9726b16da-kube-api-access-2fsl6\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.570478 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.570366 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-wtmp\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.670962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.670913 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fsl6\" (UniqueName: \"kubernetes.io/projected/e25ab46f-9394-4c4f-bf94-d5a9726b16da-kube-api-access-2fsl6\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.670962 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.670958 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-wtmp\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.670982 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-textfile\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671003 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-sys\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " 
pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671029 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e25ab46f-9394-4c4f-bf94-d5a9726b16da-metrics-client-ca\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671053 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-root\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671083 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-accelerators-collector-config\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671095 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-sys\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671130 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-wtmp\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671105 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-tls\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671176 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e25ab46f-9394-4c4f-bf94-d5a9726b16da-root\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671220 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671202 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671683 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671391 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-textfile\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671683 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671660 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e25ab46f-9394-4c4f-bf94-d5a9726b16da-metrics-client-ca\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.671845 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.671820 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-accelerators-collector-config\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.673795 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.673776 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.673878 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.673792 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e25ab46f-9394-4c4f-bf94-d5a9726b16da-node-exporter-tls\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.678752 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.678727 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fsl6\" (UniqueName: \"kubernetes.io/projected/e25ab46f-9394-4c4f-bf94-d5a9726b16da-kube-api-access-2fsl6\") pod \"node-exporter-295ld\" (UID: \"e25ab46f-9394-4c4f-bf94-d5a9726b16da\") " pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.753273 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:13.753232 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-295ld" Apr 22 15:18:13.762178 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:18:13.762140 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode25ab46f_9394_4c4f_bf94_d5a9726b16da.slice/crio-d162abf9761e591d9eebda72555cac321cd07d1054d4eb8ff479328ab0709f9f WatchSource:0}: Error finding container d162abf9761e591d9eebda72555cac321cd07d1054d4eb8ff479328ab0709f9f: Status 404 returned error can't find the container with id d162abf9761e591d9eebda72555cac321cd07d1054d4eb8ff479328ab0709f9f Apr 22 15:18:14.006329 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:14.006244 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-295ld" event={"ID":"e25ab46f-9394-4c4f-bf94-d5a9726b16da","Type":"ContainerStarted","Data":"d162abf9761e591d9eebda72555cac321cd07d1054d4eb8ff479328ab0709f9f"} Apr 22 15:18:15.011408 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:15.011367 2575 generic.go:358] "Generic (PLEG): container finished" podID="e25ab46f-9394-4c4f-bf94-d5a9726b16da" containerID="befe2e7d5654951b0a753e2869ff50ee4ed98d4b30c2e6a749744e975b8eb5a6" exitCode=0 Apr 22 15:18:15.011806 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:15.011443 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-295ld" event={"ID":"e25ab46f-9394-4c4f-bf94-d5a9726b16da","Type":"ContainerDied","Data":"befe2e7d5654951b0a753e2869ff50ee4ed98d4b30c2e6a749744e975b8eb5a6"} Apr 22 15:18:16.016274 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:16.016233 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-295ld" event={"ID":"e25ab46f-9394-4c4f-bf94-d5a9726b16da","Type":"ContainerStarted","Data":"9e98753b2c7658391bb49c085cf0dd6b250146d70a1ee5bd68c624c9be60f6f1"} Apr 22 15:18:16.016274 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:16.016280 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-295ld" event={"ID":"e25ab46f-9394-4c4f-bf94-d5a9726b16da","Type":"ContainerStarted","Data":"f1ea18534b44fe681164bdf6c05b9ab27577886740b156adac81920dfebe4aa0"} Apr 22 15:18:16.037589 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:16.037526 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-295ld" podStartSLOduration=2.316136252 podStartE2EDuration="3.03750847s" podCreationTimestamp="2026-04-22 15:18:13 +0000 UTC" firstStartedPulling="2026-04-22 15:18:13.764141917 +0000 UTC m=+592.999389722" lastFinishedPulling="2026-04-22 15:18:14.485514145 +0000 UTC m=+593.720761940" observedRunningTime="2026-04-22 15:18:16.036042557 +0000 UTC m=+595.271290367" watchObservedRunningTime="2026-04-22 15:18:16.03750847 +0000 UTC m=+595.272756281" Apr 22 15:18:18.324569 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:18.324533 2575 scope.go:117] "RemoveContainer" containerID="7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad" Apr 22 15:18:18.324983 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:18:18.324705 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" 
pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:18:20.324452 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:20.324411 2575 scope.go:117] "RemoveContainer" containerID="b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951" Apr 22 15:18:20.324926 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:18:20.324645 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"addon-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=addon-agent pod=managed-serviceaccount-addon-agent-648d979695-ch7nn_open-cluster-management-agent-addon(b50382c5-ef34-4d73-9526-989655d2e11f)\"" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" podUID="b50382c5-ef34-4d73-9526-989655d2e11f" Apr 22 15:18:21.247517 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:21.247487 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:18:21.247517 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:21.247512 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:18:21.326225 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:21.326109 2575 scope.go:117] "RemoveContainer" containerID="4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1" Apr 22 15:18:21.345107 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:18:21.326342 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry pod=image-registry-66f5f8d5cd-rgqhw_openshift-image-registry(2adf4441-467a-46c0-a616-97afe2eb9fe8)\"" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" Apr 22 15:18:27.240074 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.240027 2575 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"] Apr 22 15:18:27.358623 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.358599 2575 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:18:27.371302 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.371262 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2adf4441-467a-46c0-a616-97afe2eb9fe8-ca-trust-extracted\") pod \"2adf4441-467a-46c0-a616-97afe2eb9fe8\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " Apr 22 15:18:27.371471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.371356 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-bound-sa-token\") pod \"2adf4441-467a-46c0-a616-97afe2eb9fe8\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " Apr 22 15:18:27.371471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.371387 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-trusted-ca\") pod \"2adf4441-467a-46c0-a616-97afe2eb9fe8\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " Apr 22 15:18:27.371471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.371416 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwx2n\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-kube-api-access-dwx2n\") pod \"2adf4441-467a-46c0-a616-97afe2eb9fe8\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " Apr 22 15:18:27.371471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.371431 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") pod \"2adf4441-467a-46c0-a616-97afe2eb9fe8\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " Apr 22 15:18:27.371471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.371451 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-image-registry-private-configuration\") pod \"2adf4441-467a-46c0-a616-97afe2eb9fe8\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " Apr 22 15:18:27.371471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.371473 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-certificates\") pod \"2adf4441-467a-46c0-a616-97afe2eb9fe8\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " Apr 22 15:18:27.371969 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.371939 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2adf4441-467a-46c0-a616-97afe2eb9fe8" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 22 15:18:27.372070 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.372046 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2adf4441-467a-46c0-a616-97afe2eb9fe8" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 22 15:18:27.374056 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.374025 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "2adf4441-467a-46c0-a616-97afe2eb9fe8" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 22 15:18:27.374178 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.374073 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2adf4441-467a-46c0-a616-97afe2eb9fe8" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 22 15:18:27.374325 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.374304 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2adf4441-467a-46c0-a616-97afe2eb9fe8" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 22 15:18:27.374442 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.374296 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-kube-api-access-dwx2n" (OuterVolumeSpecName: "kube-api-access-dwx2n") pod "2adf4441-467a-46c0-a616-97afe2eb9fe8" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8"). InnerVolumeSpecName "kube-api-access-dwx2n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 22 15:18:27.391119 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.391080 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2adf4441-467a-46c0-a616-97afe2eb9fe8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2adf4441-467a-46c0-a616-97afe2eb9fe8" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 22 15:18:27.472613 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.472556 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-installation-pull-secrets\") pod \"2adf4441-467a-46c0-a616-97afe2eb9fe8\" (UID: \"2adf4441-467a-46c0-a616-97afe2eb9fe8\") " Apr 22 15:18:27.472790 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.472711 2575 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2adf4441-467a-46c0-a616-97afe2eb9fe8-ca-trust-extracted\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:18:27.472790 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.472723 2575 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-bound-sa-token\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:18:27.472790 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.472733 2575 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-trusted-ca\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:18:27.472790 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.472742 2575 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwx2n\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-kube-api-access-dwx2n\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:18:27.472790 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.472753 2575 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-tls\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:18:27.472790 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.472763 2575 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-image-registry-private-configuration\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:18:27.472790 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.472771 2575 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2adf4441-467a-46c0-a616-97afe2eb9fe8-registry-certificates\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:18:27.474812 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.474785 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2adf4441-467a-46c0-a616-97afe2eb9fe8" (UID: "2adf4441-467a-46c0-a616-97afe2eb9fe8"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 22 15:18:27.573446 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:27.573353 2575 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2adf4441-467a-46c0-a616-97afe2eb9fe8-installation-pull-secrets\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:18:28.049828 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:28.049792 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" event={"ID":"2adf4441-467a-46c0-a616-97afe2eb9fe8","Type":"ContainerDied","Data":"7ed9a2ef78421de6df9937d18b8fa6d020dca589b572458cd2823b6601fc1df2"} Apr 22 15:18:28.049828 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:28.049818 2575 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66f5f8d5cd-rgqhw" Apr 22 15:18:28.050094 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:28.049847 2575 scope.go:117] "RemoveContainer" containerID="4f598263fbba4e1d553ee83668cf9f79d1fa7504694d86ebdb4921c31bf622f1" Apr 22 15:18:28.069428 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:28.069397 2575 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"] Apr 22 15:18:28.072742 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:28.072712 2575 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66f5f8d5cd-rgqhw"] Apr 22 15:18:29.324800 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:29.324769 2575 scope.go:117] "RemoveContainer" containerID="7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad" Apr 22 15:18:29.325218 ip-10-0-134-217 kubenswrapper[2575]: E0422 15:18:29.324978 2575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"acm-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=acm-agent pod=klusterlet-addon-workmgr-567b4745f5-tj4cv_open-cluster-management-agent-addon(047276c4-c2f7-4f16-a7be-64fba5485b6c)\"" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" podUID="047276c4-c2f7-4f16-a7be-64fba5485b6c" Apr 22 15:18:29.328036 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:29.328011 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" path="/var/lib/kubelet/pods/2adf4441-467a-46c0-a616-97afe2eb9fe8/volumes" Apr 22 15:18:34.324575 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:34.324541 2575 scope.go:117] "RemoveContainer" containerID="b074a967f88be3cc1d1866829ca77d657a9c1718485754468fdf1c477b95c951" Apr 22 15:18:35.072092 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:35.072054 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-648d979695-ch7nn" event={"ID":"b50382c5-ef34-4d73-9526-989655d2e11f","Type":"ContainerStarted","Data":"4a8dad8486c20d28f21c834610138a1b0d0506ff23742ddd23049d0d525af1d6"} Apr 22 15:18:44.324081 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:44.324044 2575 scope.go:117] "RemoveContainer" containerID="7407681406bb28dd57d6d74b881999d4ff58483f5c5c2d2df3eb03eb586652ad" Apr 22 15:18:45.098830 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:45.098789 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" 
event={"ID":"047276c4-c2f7-4f16-a7be-64fba5485b6c","Type":"ContainerStarted","Data":"d76ec353e12dbe66f82ac7960e9ed54c6101a29c3dfb995ed5ee8b81d1c5b556"} Apr 22 15:18:45.099127 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:45.099108 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:18:45.100667 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:18:45.100645 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-567b4745f5-tj4cv" Apr 22 15:22:49.880274 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880234 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-587ccfb98-qqrf7"] Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880460 2575 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880471 2575 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880481 2575 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880486 2575 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880493 2575 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880499 2575 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880508 2575 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880513 2575 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880519 2575 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880523 2575 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880529 2575 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880533 2575 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880570 2575 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880577 2575 memory_manager.go:356] "RemoveStaleState removing state" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880582 2575 memory_manager.go:356] "RemoveStaleState removing state" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880589 2575 memory_manager.go:356] "RemoveStaleState removing state" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.880773 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.880595 2575 memory_manager.go:356] "RemoveStaleState removing state" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:22:49.883337 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.883321 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:49.886640 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.886613 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Apr 22 15:22:49.887580 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.887565 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Apr 22 15:22:49.887658 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.887601 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-sd6v2\"" Apr 22 15:22:49.891419 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.891397 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-587ccfb98-qqrf7"] Apr 22 15:22:49.961075 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.961035 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2zcd\" (UniqueName: \"kubernetes.io/projected/7fbb649c-3d04-439d-97b4-c0818a94798f-kube-api-access-g2zcd\") pod \"cert-manager-webhook-587ccfb98-qqrf7\" (UID: \"7fbb649c-3d04-439d-97b4-c0818a94798f\") " pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:49.961075 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:49.961079 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fbb649c-3d04-439d-97b4-c0818a94798f-bound-sa-token\") pod \"cert-manager-webhook-587ccfb98-qqrf7\" (UID: \"7fbb649c-3d04-439d-97b4-c0818a94798f\") " pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:50.062037 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:50.061991 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fbb649c-3d04-439d-97b4-c0818a94798f-bound-sa-token\") pod \"cert-manager-webhook-587ccfb98-qqrf7\" (UID: \"7fbb649c-3d04-439d-97b4-c0818a94798f\") " pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:50.062134 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:50.062082 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g2zcd\" (UniqueName: 
\"kubernetes.io/projected/7fbb649c-3d04-439d-97b4-c0818a94798f-kube-api-access-g2zcd\") pod \"cert-manager-webhook-587ccfb98-qqrf7\" (UID: \"7fbb649c-3d04-439d-97b4-c0818a94798f\") " pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:50.071025 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:50.071003 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2zcd\" (UniqueName: \"kubernetes.io/projected/7fbb649c-3d04-439d-97b4-c0818a94798f-kube-api-access-g2zcd\") pod \"cert-manager-webhook-587ccfb98-qqrf7\" (UID: \"7fbb649c-3d04-439d-97b4-c0818a94798f\") " pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:50.071105 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:50.071003 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fbb649c-3d04-439d-97b4-c0818a94798f-bound-sa-token\") pod \"cert-manager-webhook-587ccfb98-qqrf7\" (UID: \"7fbb649c-3d04-439d-97b4-c0818a94798f\") " pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:50.193307 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:50.193225 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:50.311296 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:50.311235 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-587ccfb98-qqrf7"] Apr 22 15:22:50.313835 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:22:50.313808 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fbb649c_3d04_439d_97b4_c0818a94798f.slice/crio-10344fa0fa795c1a242fbb7da32ae3a58f7d58756c9d2162b757b475816878cd WatchSource:0}: Error finding container 10344fa0fa795c1a242fbb7da32ae3a58f7d58756c9d2162b757b475816878cd: Status 404 returned error can't find the container with id 10344fa0fa795c1a242fbb7da32ae3a58f7d58756c9d2162b757b475816878cd Apr 22 15:22:50.315809 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:50.315789 2575 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 22 15:22:50.745102 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:50.745052 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" event={"ID":"7fbb649c-3d04-439d-97b4-c0818a94798f","Type":"ContainerStarted","Data":"10344fa0fa795c1a242fbb7da32ae3a58f7d58756c9d2162b757b475816878cd"} Apr 22 15:22:53.755226 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:53.755187 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" event={"ID":"7fbb649c-3d04-439d-97b4-c0818a94798f","Type":"ContainerStarted","Data":"6dd0c206ef771660428069b1da2a91d932356bfae2bb5ee222141d8f7ff04fe8"} Apr 22 15:22:53.755595 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:53.755250 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:22:53.775295 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:53.775246 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" podStartSLOduration=2.090833557 podStartE2EDuration="4.775231115s" podCreationTimestamp="2026-04-22 15:22:49 +0000 UTC" firstStartedPulling="2026-04-22 15:22:50.316000071 +0000 UTC 
m=+869.551247873" lastFinishedPulling="2026-04-22 15:22:53.000397622 +0000 UTC m=+872.235645431" observedRunningTime="2026-04-22 15:22:53.773066303 +0000 UTC m=+873.008314112" watchObservedRunningTime="2026-04-22 15:22:53.775231115 +0000 UTC m=+873.010478924" Apr 22 15:22:59.759740 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:22:59.759707 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-587ccfb98-qqrf7" Apr 22 15:23:21.264531 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:21.264503 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:23:21.265349 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:21.265330 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:23:48.663251 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.663213 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9"] Apr 22 15:23:48.665641 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.663492 2575 memory_manager.go:356] "RemoveStaleState removing state" podUID="2adf4441-467a-46c0-a616-97afe2eb9fe8" containerName="registry" Apr 22 15:23:48.666537 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.666521 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.669438 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.669415 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"opendatahub\"/\"kubeflow-trainer-config\"" Apr 22 15:23:48.670496 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.670472 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"opendatahub\"/\"kube-root-ca.crt\"" Apr 22 15:23:48.670496 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.670494 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"opendatahub\"/\"kubeflow-trainer-webhook-cert\"" Apr 22 15:23:48.670655 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.670500 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"opendatahub\"/\"kubeflow-trainer-controller-manager-dockercfg-5qc96\"" Apr 22 15:23:48.670655 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.670505 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"opendatahub\"/\"openshift-service-ca.crt\"" Apr 22 15:23:48.674312 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.674286 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9"] Apr 22 15:23:48.761398 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.761359 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6znz\" (UniqueName: \"kubernetes.io/projected/5fff5505-1741-4009-999b-0c93a45b780a-kube-api-access-w6znz\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.761561 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.761405 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fff5505-1741-4009-999b-0c93a45b780a-cert\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.761561 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.761484 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeflow-trainer-config\" (UniqueName: \"kubernetes.io/configmap/5fff5505-1741-4009-999b-0c93a45b780a-kubeflow-trainer-config\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.861937 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.861891 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w6znz\" (UniqueName: \"kubernetes.io/projected/5fff5505-1741-4009-999b-0c93a45b780a-kube-api-access-w6znz\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.862131 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.861954 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fff5505-1741-4009-999b-0c93a45b780a-cert\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.862131 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.861992 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubeflow-trainer-config\" (UniqueName: \"kubernetes.io/configmap/5fff5505-1741-4009-999b-0c93a45b780a-kubeflow-trainer-config\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.862640 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.862613 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubeflow-trainer-config\" (UniqueName: \"kubernetes.io/configmap/5fff5505-1741-4009-999b-0c93a45b780a-kubeflow-trainer-config\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.864436 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.864415 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fff5505-1741-4009-999b-0c93a45b780a-cert\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.870573 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:48.870548 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6znz\" (UniqueName: \"kubernetes.io/projected/5fff5505-1741-4009-999b-0c93a45b780a-kube-api-access-w6znz\") pod \"kubeflow-trainer-controller-manager-7dd5f9474-tltm9\" (UID: \"5fff5505-1741-4009-999b-0c93a45b780a\") " pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:48.976403 ip-10-0-134-217 
kubenswrapper[2575]: I0422 15:23:48.976372 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:49.097650 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:49.097616 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9"] Apr 22 15:23:49.100816 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:23:49.100779 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fff5505_1741_4009_999b_0c93a45b780a.slice/crio-81f12915b442f343d8f9b36a4f55e240794ede4b7f0637b4bee6b6526b8fb1e4 WatchSource:0}: Error finding container 81f12915b442f343d8f9b36a4f55e240794ede4b7f0637b4bee6b6526b8fb1e4: Status 404 returned error can't find the container with id 81f12915b442f343d8f9b36a4f55e240794ede4b7f0637b4bee6b6526b8fb1e4 Apr 22 15:23:49.903158 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:49.903103 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" event={"ID":"5fff5505-1741-4009-999b-0c93a45b780a","Type":"ContainerStarted","Data":"81f12915b442f343d8f9b36a4f55e240794ede4b7f0637b4bee6b6526b8fb1e4"} Apr 22 15:23:51.909011 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:51.908912 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" event={"ID":"5fff5505-1741-4009-999b-0c93a45b780a","Type":"ContainerStarted","Data":"964cd1f754c97b9c121b5ecce401190265f389a900c902cc5bb2a9eb980e3808"} Apr 22 15:23:51.909357 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:51.909013 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:23:51.925233 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:23:51.925189 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" podStartSLOduration=1.406613717 podStartE2EDuration="3.925175023s" podCreationTimestamp="2026-04-22 15:23:48 +0000 UTC" firstStartedPulling="2026-04-22 15:23:49.10259331 +0000 UTC m=+928.337841101" lastFinishedPulling="2026-04-22 15:23:51.621154619 +0000 UTC m=+930.856402407" observedRunningTime="2026-04-22 15:23:51.925081677 +0000 UTC m=+931.160329487" watchObservedRunningTime="2026-04-22 15:23:51.925175023 +0000 UTC m=+931.160422811" Apr 22 15:24:07.916680 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:24:07.916647 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="opendatahub/kubeflow-trainer-controller-manager-7dd5f9474-tltm9" Apr 22 15:28:21.280073 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:28:21.280044 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:28:21.282031 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:28:21.282005 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:29:09.021406 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.021325 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68"] Apr 22 15:29:09.024667 ip-10-0-134-217 
kubenswrapper[2575]: I0422 15:29:09.024649 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:29:09.027384 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.027358 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-tdbgv\"/\"kube-root-ca.crt\"" Apr 22 15:29:09.027384 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.027379 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"rhai-e2e-progression-tdbgv\"/\"default-dockercfg-sbhg8\"" Apr 22 15:29:09.027550 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.027361 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-tdbgv\"/\"openshift-service-ca.crt\"" Apr 22 15:29:09.037205 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.037178 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68"] Apr 22 15:29:09.065530 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.065498 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zsbm\" (UniqueName: \"kubernetes.io/projected/e6338d20-9dc3-4039-9fa7-5a0b9a8680fc-kube-api-access-2zsbm\") pod \"progression-custom-config-node-0-0-9fj68\" (UID: \"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc\") " pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:29:09.166494 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.166447 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zsbm\" (UniqueName: \"kubernetes.io/projected/e6338d20-9dc3-4039-9fa7-5a0b9a8680fc-kube-api-access-2zsbm\") pod \"progression-custom-config-node-0-0-9fj68\" (UID: \"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc\") " pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:29:09.175293 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.175261 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zsbm\" (UniqueName: \"kubernetes.io/projected/e6338d20-9dc3-4039-9fa7-5a0b9a8680fc-kube-api-access-2zsbm\") pod \"progression-custom-config-node-0-0-9fj68\" (UID: \"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc\") " pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:29:09.334406 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.334332 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:29:09.457336 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.457312 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68"] Apr 22 15:29:09.459823 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:29:09.459794 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6338d20_9dc3_4039_9fa7_5a0b9a8680fc.slice/crio-dc934712c693d82f97b1549d71210161e81f9cea25664a59311a3f2eeb6d7248 WatchSource:0}: Error finding container dc934712c693d82f97b1549d71210161e81f9cea25664a59311a3f2eeb6d7248: Status 404 returned error can't find the container with id dc934712c693d82f97b1549d71210161e81f9cea25664a59311a3f2eeb6d7248 Apr 22 15:29:09.461981 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.461964 2575 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 22 15:29:09.716501 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:29:09.716461 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" event={"ID":"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc","Type":"ContainerStarted","Data":"dc934712c693d82f97b1549d71210161e81f9cea25664a59311a3f2eeb6d7248"} Apr 22 15:31:05.035243 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:05.035206 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" event={"ID":"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc","Type":"ContainerStarted","Data":"c960f988aca7d8be911c4c9c16adba81045a6cf5d15404329e888876a4ce1c0e"} Apr 22 15:31:05.035663 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:05.035344 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:31:05.059203 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:05.059142 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" podStartSLOduration=0.858951107 podStartE2EDuration="1m56.059122446s" podCreationTimestamp="2026-04-22 15:29:09 +0000 UTC" firstStartedPulling="2026-04-22 15:29:09.462124426 +0000 UTC m=+1248.697372214" lastFinishedPulling="2026-04-22 15:31:04.662295766 +0000 UTC m=+1363.897543553" observedRunningTime="2026-04-22 15:31:05.053742971 +0000 UTC m=+1364.288990781" watchObservedRunningTime="2026-04-22 15:31:05.059122446 +0000 UTC m=+1364.294370257" Apr 22 15:31:06.037720 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:06.037678 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" podUID="e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" containerName="node" probeResult="failure" output="Get \"http://10.132.0.15:28080/metrics\": dial tcp 10.132.0.15:28080: connect: connection refused" Apr 22 15:31:06.038750 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:06.038722 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" podUID="e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" containerName="node" probeResult="failure" output="Get \"http://10.132.0.15:28080/metrics\": dial tcp 10.132.0.15:28080: connect: connection refused" Apr 22 15:31:07.040116 ip-10-0-134-217 
kubenswrapper[2575]: I0422 15:31:07.040084 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:31:28.038920 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:28.038835 2575 prober.go:120] "Probe failed" probeType="Readiness" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" podUID="e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" containerName="node" probeResult="failure" output="Get \"http://10.132.0.15:28080/metrics\": dial tcp 10.132.0.15:28080: connect: connection refused" Apr 22 15:31:28.098800 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:28.098773 2575 generic.go:358] "Generic (PLEG): container finished" podID="e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" containerID="c960f988aca7d8be911c4c9c16adba81045a6cf5d15404329e888876a4ce1c0e" exitCode=0 Apr 22 15:31:28.098950 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:28.098853 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" event={"ID":"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc","Type":"ContainerDied","Data":"c960f988aca7d8be911c4c9c16adba81045a6cf5d15404329e888876a4ce1c0e"} Apr 22 15:31:29.222395 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:29.222373 2575 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:31:29.335256 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:29.335226 2575 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zsbm\" (UniqueName: \"kubernetes.io/projected/e6338d20-9dc3-4039-9fa7-5a0b9a8680fc-kube-api-access-2zsbm\") pod \"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc\" (UID: \"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc\") " Apr 22 15:31:29.337518 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:29.337492 2575 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6338d20-9dc3-4039-9fa7-5a0b9a8680fc-kube-api-access-2zsbm" (OuterVolumeSpecName: "kube-api-access-2zsbm") pod "e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" (UID: "e6338d20-9dc3-4039-9fa7-5a0b9a8680fc"). InnerVolumeSpecName "kube-api-access-2zsbm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 22 15:31:29.436694 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:29.436636 2575 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zsbm\" (UniqueName: \"kubernetes.io/projected/e6338d20-9dc3-4039-9fa7-5a0b9a8680fc-kube-api-access-2zsbm\") on node \"ip-10-0-134-217.ec2.internal\" DevicePath \"\"" Apr 22 15:31:30.105404 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:30.105365 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" event={"ID":"e6338d20-9dc3-4039-9fa7-5a0b9a8680fc","Type":"ContainerDied","Data":"dc934712c693d82f97b1549d71210161e81f9cea25664a59311a3f2eeb6d7248"} Apr 22 15:31:30.105404 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:30.105402 2575 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68" Apr 22 15:31:30.105596 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:30.105403 2575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc934712c693d82f97b1549d71210161e81f9cea25664a59311a3f2eeb6d7248" Apr 22 15:31:32.747413 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:32.747376 2575 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68"] Apr 22 15:31:32.750219 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:32.750190 2575 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["rhai-e2e-progression-tdbgv/progression-custom-config-node-0-0-9fj68"] Apr 22 15:31:33.328127 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:33.328092 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" path="/var/lib/kubelet/pods/e6338d20-9dc3-4039-9fa7-5a0b9a8680fc/volumes" Apr 22 15:31:43.816522 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:43.816483 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/opendatahub_kubeflow-trainer-controller-manager-7dd5f9474-tltm9_5fff5505-1741-4009-999b-0c93a45b780a/manager/0.log" Apr 22 15:31:44.259490 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:44.259460 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/opendatahub_kubeflow-trainer-controller-manager-7dd5f9474-tltm9_5fff5505-1741-4009-999b-0c93a45b780a/manager/0.log" Apr 22 15:31:44.709070 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:44.709039 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/opendatahub_kubeflow-trainer-controller-manager-7dd5f9474-tltm9_5fff5505-1741-4009-999b-0c93a45b780a/manager/0.log" Apr 22 15:31:55.482394 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.482354 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nrz24/must-gather-xv2lk"] Apr 22 15:31:55.482804 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.482637 2575 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" containerName="node" Apr 22 15:31:55.482804 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.482656 2575 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" containerName="node" Apr 22 15:31:55.482804 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.482735 2575 memory_manager.go:356] "RemoveStaleState removing state" podUID="e6338d20-9dc3-4039-9fa7-5a0b9a8680fc" containerName="node" Apr 22 15:31:55.485547 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.485532 2575 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nrz24/must-gather-xv2lk" Apr 22 15:31:55.488087 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.488065 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-nrz24\"/\"openshift-service-ca.crt\"" Apr 22 15:31:55.489180 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.489160 2575 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-nrz24\"/\"default-dockercfg-klhx2\"" Apr 22 15:31:55.489248 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.489165 2575 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-nrz24\"/\"kube-root-ca.crt\"" Apr 22 15:31:55.491422 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.491401 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nrz24/must-gather-xv2lk"] Apr 22 15:31:55.514376 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.514350 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4efacdb6-e83d-48cf-a6e0-10038937f148-must-gather-output\") pod \"must-gather-xv2lk\" (UID: \"4efacdb6-e83d-48cf-a6e0-10038937f148\") " pod="openshift-must-gather-nrz24/must-gather-xv2lk" Apr 22 15:31:55.514500 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.514382 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnr56\" (UniqueName: \"kubernetes.io/projected/4efacdb6-e83d-48cf-a6e0-10038937f148-kube-api-access-wnr56\") pod \"must-gather-xv2lk\" (UID: \"4efacdb6-e83d-48cf-a6e0-10038937f148\") " pod="openshift-must-gather-nrz24/must-gather-xv2lk" Apr 22 15:31:55.615770 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.615727 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4efacdb6-e83d-48cf-a6e0-10038937f148-must-gather-output\") pod \"must-gather-xv2lk\" (UID: \"4efacdb6-e83d-48cf-a6e0-10038937f148\") " pod="openshift-must-gather-nrz24/must-gather-xv2lk" Apr 22 15:31:55.615770 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.615769 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wnr56\" (UniqueName: \"kubernetes.io/projected/4efacdb6-e83d-48cf-a6e0-10038937f148-kube-api-access-wnr56\") pod \"must-gather-xv2lk\" (UID: \"4efacdb6-e83d-48cf-a6e0-10038937f148\") " pod="openshift-must-gather-nrz24/must-gather-xv2lk" Apr 22 15:31:55.616097 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.616074 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4efacdb6-e83d-48cf-a6e0-10038937f148-must-gather-output\") pod \"must-gather-xv2lk\" (UID: \"4efacdb6-e83d-48cf-a6e0-10038937f148\") " pod="openshift-must-gather-nrz24/must-gather-xv2lk" Apr 22 15:31:55.623893 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.623854 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnr56\" (UniqueName: \"kubernetes.io/projected/4efacdb6-e83d-48cf-a6e0-10038937f148-kube-api-access-wnr56\") pod \"must-gather-xv2lk\" (UID: \"4efacdb6-e83d-48cf-a6e0-10038937f148\") " pod="openshift-must-gather-nrz24/must-gather-xv2lk" Apr 22 15:31:55.795139 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.795061 2575 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-must-gather-nrz24/must-gather-xv2lk" Apr 22 15:31:55.909654 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:55.909620 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nrz24/must-gather-xv2lk"] Apr 22 15:31:55.912459 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:31:55.912433 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4efacdb6_e83d_48cf_a6e0_10038937f148.slice/crio-c4486ad138254984919ad5750aa86522a8913954182e912b8fa27497c3c244d6 WatchSource:0}: Error finding container c4486ad138254984919ad5750aa86522a8913954182e912b8fa27497c3c244d6: Status 404 returned error can't find the container with id c4486ad138254984919ad5750aa86522a8913954182e912b8fa27497c3c244d6 Apr 22 15:31:56.175794 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:56.175702 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nrz24/must-gather-xv2lk" event={"ID":"4efacdb6-e83d-48cf-a6e0-10038937f148","Type":"ContainerStarted","Data":"c4486ad138254984919ad5750aa86522a8913954182e912b8fa27497c3c244d6"} Apr 22 15:31:57.180994 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:57.180490 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nrz24/must-gather-xv2lk" event={"ID":"4efacdb6-e83d-48cf-a6e0-10038937f148","Type":"ContainerStarted","Data":"21c74b2d8879560d31d8d5f32372bce0cc7bb31841b2d4400edf45086c77da6e"} Apr 22 15:31:57.180994 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:57.180534 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nrz24/must-gather-xv2lk" event={"ID":"4efacdb6-e83d-48cf-a6e0-10038937f148","Type":"ContainerStarted","Data":"780665e468b59d0cbc08266fae6ec1fc39742a7148ecb374717b06c29efb5849"} Apr 22 15:31:57.196973 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:57.196927 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nrz24/must-gather-xv2lk" podStartSLOduration=1.331392538 podStartE2EDuration="2.19691105s" podCreationTimestamp="2026-04-22 15:31:55 +0000 UTC" firstStartedPulling="2026-04-22 15:31:55.914166306 +0000 UTC m=+1415.149414109" lastFinishedPulling="2026-04-22 15:31:56.779684833 +0000 UTC m=+1416.014932621" observedRunningTime="2026-04-22 15:31:57.195897699 +0000 UTC m=+1416.431145511" watchObservedRunningTime="2026-04-22 15:31:57.19691105 +0000 UTC m=+1416.432158885" Apr 22 15:31:58.002436 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:58.002406 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-6fwnt_95af4bf4-9e09-49ec-bfb1-f16c11110db8/global-pull-secret-syncer/0.log" Apr 22 15:31:58.177203 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:58.177151 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-qxtv4_a17c3e99-1108-4fee-af0c-ec3741b68100/konnectivity-agent/0.log" Apr 22 15:31:58.197408 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:31:58.197382 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-134-217.ec2.internal_9b396163e7a8a1c1709913f4b2fb7b1e/haproxy/0.log" Apr 22 15:32:01.768277 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:01.768175 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-295ld_e25ab46f-9394-4c4f-bf94-d5a9726b16da/node-exporter/0.log" Apr 22 15:32:01.796855 
ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:01.796824 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-295ld_e25ab46f-9394-4c4f-bf94-d5a9726b16da/kube-rbac-proxy/0.log" Apr 22 15:32:01.828049 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:01.828023 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-295ld_e25ab46f-9394-4c4f-bf94-d5a9726b16da/init-textfile/0.log" Apr 22 15:32:04.710456 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.710413 2575 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd"] Apr 22 15:32:04.714515 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.714491 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.722126 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.722100 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd"] Apr 22 15:32:04.794834 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.794790 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkmdz\" (UniqueName: \"kubernetes.io/projected/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-kube-api-access-dkmdz\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.794834 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.794830 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-proc\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.795042 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.794852 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-sys\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.795042 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.794978 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-lib-modules\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.795042 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.795013 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-podres\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896392 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896350 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: 
\"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-podres\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896594 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896422 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkmdz\" (UniqueName: \"kubernetes.io/projected/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-kube-api-access-dkmdz\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896594 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896453 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-proc\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896594 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896473 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-sys\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896594 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896532 2575 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-lib-modules\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896594 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896565 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-podres\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896594 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896589 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-proc\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896814 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896599 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-sys\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.896814 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.896662 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-lib-modules\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " 
pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:04.905538 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:04.905509 2575 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkmdz\" (UniqueName: \"kubernetes.io/projected/2c4eac7a-1c5e-4864-abf7-ab3c73d423f0-kube-api-access-dkmdz\") pod \"perf-node-gather-daemonset-kfrnd\" (UID: \"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0\") " pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:05.025835 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:05.025760 2575 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:05.153121 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:05.153096 2575 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd"] Apr 22 15:32:05.156325 ip-10-0-134-217 kubenswrapper[2575]: W0422 15:32:05.156294 2575 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2c4eac7a_1c5e_4864_abf7_ab3c73d423f0.slice/crio-f9c18c7194bf65161cd3d3ae9b183c7933cbdc9ac2efec51e437c6ca896debb9 WatchSource:0}: Error finding container f9c18c7194bf65161cd3d3ae9b183c7933cbdc9ac2efec51e437c6ca896debb9: Status 404 returned error can't find the container with id f9c18c7194bf65161cd3d3ae9b183c7933cbdc9ac2efec51e437c6ca896debb9 Apr 22 15:32:05.208837 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:05.208808 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" event={"ID":"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0","Type":"ContainerStarted","Data":"f9c18c7194bf65161cd3d3ae9b183c7933cbdc9ac2efec51e437c6ca896debb9"} Apr 22 15:32:05.717709 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:05.717684 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-rb7d6_63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b/dns/0.log" Apr 22 15:32:05.739829 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:05.739790 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-rb7d6_63cdf3b0-7bf4-40fc-a333-2ca716b7ef3b/kube-rbac-proxy/0.log" Apr 22 15:32:05.788640 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:05.788618 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-hqq4l_54264bd4-ce9e-4010-b213-56e5f4bfe070/dns-node-resolver/0.log" Apr 22 15:32:06.212996 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:06.212962 2575 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" event={"ID":"2c4eac7a-1c5e-4864-abf7-ab3c73d423f0","Type":"ContainerStarted","Data":"c95a57e039d4e499896adbf890ec4d9c61ec8758c42f43db46b481fe5148365a"} Apr 22 15:32:06.213151 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:06.213079 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:06.230052 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:06.230008 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" podStartSLOduration=2.229993773 podStartE2EDuration="2.229993773s" podCreationTimestamp="2026-04-22 15:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-04-22 15:32:06.228206201 +0000 UTC m=+1425.463454027" watchObservedRunningTime="2026-04-22 15:32:06.229993773 +0000 UTC m=+1425.465241583" Apr 22 15:32:06.242278 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:06.242247 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-rw9wr_24727a23-7950-43c6-9a15-92416687fab7/node-ca/0.log" Apr 22 15:32:07.320471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:07.320442 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-4lslj_60651bed-aafc-4a23-b90f-3110ee68359c/serve-healthcheck-canary/0.log" Apr 22 15:32:07.854272 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:07.854246 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-j9pck_1f82ac4e-087f-449b-b8e0-2f3bfab3600e/kube-rbac-proxy/0.log" Apr 22 15:32:07.876795 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:07.876768 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-j9pck_1f82ac4e-087f-449b-b8e0-2f3bfab3600e/exporter/0.log" Apr 22 15:32:07.900855 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:07.900832 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-j9pck_1f82ac4e-087f-449b-b8e0-2f3bfab3600e/extractor/0.log" Apr 22 15:32:12.228120 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:12.228091 2575 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-nrz24/perf-node-gather-daemonset-kfrnd" Apr 22 15:32:13.901515 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:13.901421 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4rqkv_df2d7157-ac73-43ed-adb1-0db7ad5e65fd/kube-multus/0.log" Apr 22 15:32:14.301527 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.301497 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xrffc_a5f9bf55-b089-4f8e-8313-0f7409db1455/kube-multus-additional-cni-plugins/0.log" Apr 22 15:32:14.326516 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.326493 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xrffc_a5f9bf55-b089-4f8e-8313-0f7409db1455/egress-router-binary-copy/0.log" Apr 22 15:32:14.351028 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.351003 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xrffc_a5f9bf55-b089-4f8e-8313-0f7409db1455/cni-plugins/0.log" Apr 22 15:32:14.376027 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.375994 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xrffc_a5f9bf55-b089-4f8e-8313-0f7409db1455/bond-cni-plugin/0.log" Apr 22 15:32:14.402401 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.402371 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xrffc_a5f9bf55-b089-4f8e-8313-0f7409db1455/routeoverride-cni/0.log" Apr 22 15:32:14.426404 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.426380 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xrffc_a5f9bf55-b089-4f8e-8313-0f7409db1455/whereabouts-cni-bincopy/0.log" Apr 22 15:32:14.452262 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.452236 2575 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xrffc_a5f9bf55-b089-4f8e-8313-0f7409db1455/whereabouts-cni/0.log" Apr 22 15:32:14.594125 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.594035 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-b6hrq_be8c5f47-6214-42a7-8e36-1c852cc48be6/network-metrics-daemon/0.log" Apr 22 15:32:14.615454 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:14.615428 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-b6hrq_be8c5f47-6214-42a7-8e36-1c852cc48be6/kube-rbac-proxy/0.log" Apr 22 15:32:15.926686 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:15.926608 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-controller/0.log" Apr 22 15:32:15.949602 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:15.949574 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/0.log" Apr 22 15:32:15.956209 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:15.956187 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovn-acl-logging/1.log" Apr 22 15:32:15.979044 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:15.979023 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/kube-rbac-proxy-node/0.log" Apr 22 15:32:16.005785 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:16.005760 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/kube-rbac-proxy-ovn-metrics/0.log" Apr 22 15:32:16.027028 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:16.027011 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/northd/0.log" Apr 22 15:32:16.049185 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:16.049164 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/nbdb/0.log" Apr 22 15:32:16.072231 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:16.072209 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/sbdb/0.log" Apr 22 15:32:16.173247 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:16.173220 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lt5hd_7b9c0073-689d-408d-ac2b-84411c925f02/ovnkube-controller/0.log" Apr 22 15:32:17.172575 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:17.172542 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-j6s9c_0e7fe577-78a6-4227-b074-218a66e869bc/network-check-target-container/0.log" Apr 22 15:32:18.129579 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:18.129549 2575 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-x467p_af1546e5-60a5-4932-8506-3627e007c4b6/iptables-alerter/0.log" Apr 22 15:32:18.771471 ip-10-0-134-217 kubenswrapper[2575]: I0422 15:32:18.771447 2575 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-g4dbx_ffffeec3-bd38-4d24-8d3d-36ee2cdbe144/tuned/0.log"