Apr 23 17:49:50.457501 ip-10-0-135-87 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 23 17:49:50.457514 ip-10-0-135-87 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 23 17:49:50.457521 ip-10-0-135-87 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 23 17:49:50.457731 ip-10-0-135-87 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 23 17:50:00.602941 ip-10-0-135-87 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 23 17:50:00.602960 ip-10-0-135-87 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot e15ceba2127345d4b6403bad3504d375 --
Apr 23 17:52:09.504126 ip-10-0-135-87 systemd[1]: Starting Kubernetes Kubelet...
Apr 23 17:52:10.006308 ip-10-0-135-87 kubenswrapper[2574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:10.006308 ip-10-0-135-87 kubenswrapper[2574]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 23 17:52:10.006308 ip-10-0-135-87 kubenswrapper[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:10.006308 ip-10-0-135-87 kubenswrapper[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 17:52:10.006308 ip-10-0-135-87 kubenswrapper[2574]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
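The 'resources' failures above are systemd setup failures rather than kubelet crashes: the unit references an environment file and a start-pre script that do not exist yet, and the automatic restart cannot be scheduled because crio.service is not present. One way to confirm this on the node (a sketch, assuming shell access and the unit names shown in the log) is:

    # Show the unit plus any drop-ins, including the EnvironmentFile=
    # and ExecStartPre= paths that systemd failed to find.
    systemctl cat kubelet.service

    # "Failed to schedule restart job: Unit crio.service not found":
    # check whether the container runtime unit exists and is loaded.
    systemctl status crio.service

    # Replay the kubelet unit's messages for the current boot.
    journalctl -b -u kubelet.service --no-pager

The deprecation warnings that follow the "-- Boot ... --" marker are informational: the kubelet still honors --container-runtime-endpoint, --volume-plugin-dir, --system-reserved, and the other flagged options, but upstream recommends setting them in the file passed via --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump below).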
Apr 23 17:52:10.008287 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.008199 2574 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 23 17:52:10.012327 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012313 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012328 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012332 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012336 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012339 2574 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012343 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012345 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012348 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012351 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012354 2574 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012356 2574 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012359 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012362 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012364 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:10.012365 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012367 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012392 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012396 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012399 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012402 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012405 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012407 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012410 2574 
feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012413 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012416 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012418 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012421 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012423 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012426 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012428 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012430 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012433 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012435 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012438 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:10.012692 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012440 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012442 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012445 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012447 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012449 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012452 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012456 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012459 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012462 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012464 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012467 2574 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012469 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012471 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012474 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012477 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012481 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012483 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012486 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012489 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:10.013165 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012492 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012494 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012497 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012500 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012503 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012505 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012508 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012511 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012513 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012515 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012518 2574 feature_gate.go:328] unrecognized feature gate: 
ManagedBootImages Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012521 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012523 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012526 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012530 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012534 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012537 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012539 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012542 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012544 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:10.013637 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012547 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012549 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012552 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012556 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012559 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012562 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012565 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012568 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012571 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012574 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012576 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012579 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012581 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012584 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.012999 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013005 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013009 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013012 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013015 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:10.014175 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013018 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013021 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013024 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013027 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013030 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013033 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013036 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013039 2574 feature_gate.go:328] 
unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013042 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013045 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013048 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013050 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013053 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013056 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013059 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013061 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013063 2574 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013066 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013068 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013071 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:10.014631 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013074 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013076 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013079 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013083 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013086 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013089 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013092 2574 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013095 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013098 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013100 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013103 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013105 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013108 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013111 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013114 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013116 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013119 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013121 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013124 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:10.015170 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013126 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013129 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013132 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013134 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013137 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013139 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013142 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013145 2574 
feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013149 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013152 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013155 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013158 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013160 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013163 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013165 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013168 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013170 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013173 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013175 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013178 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:10.015627 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013181 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013183 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013185 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013188 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013193 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013195 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013198 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013200 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013203 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013205 2574 feature_gate.go:328] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013208 2574 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013211 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013213 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013216 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013219 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013221 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013224 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013227 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013229 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013232 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:10.016139 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013235 2574 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013238 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013310 2574 flags.go:64] FLAG: --address="0.0.0.0" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013321 2574 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013328 2574 flags.go:64] FLAG: --anonymous-auth="true" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013332 2574 flags.go:64] FLAG: --application-metrics-count-limit="100" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013336 2574 flags.go:64] FLAG: --authentication-token-webhook="false" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013339 2574 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013344 2574 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013349 2574 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013352 2574 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013355 2574 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013359 2574 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Apr 23 17:52:10.016619 ip-10-0-135-87 
kubenswrapper[2574]: I0423 17:52:10.013362 2574 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013365 2574 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013368 2574 flags.go:64] FLAG: --cgroup-root="" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013372 2574 flags.go:64] FLAG: --cgroups-per-qos="true" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013375 2574 flags.go:64] FLAG: --client-ca-file="" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013378 2574 flags.go:64] FLAG: --cloud-config="" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013380 2574 flags.go:64] FLAG: --cloud-provider="external" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013383 2574 flags.go:64] FLAG: --cluster-dns="[]" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013387 2574 flags.go:64] FLAG: --cluster-domain="" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013390 2574 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013393 2574 flags.go:64] FLAG: --config-dir="" Apr 23 17:52:10.016619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013396 2574 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013399 2574 flags.go:64] FLAG: --container-log-max-files="5" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013403 2574 flags.go:64] FLAG: --container-log-max-size="10Mi" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013413 2574 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013416 2574 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013419 2574 flags.go:64] FLAG: --containerd-namespace="k8s.io" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013422 2574 flags.go:64] FLAG: --contention-profiling="false" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013425 2574 flags.go:64] FLAG: --cpu-cfs-quota="true" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013428 2574 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013431 2574 flags.go:64] FLAG: --cpu-manager-policy="none" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013434 2574 flags.go:64] FLAG: --cpu-manager-policy-options="" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013442 2574 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013446 2574 flags.go:64] FLAG: --enable-controller-attach-detach="true" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013448 2574 flags.go:64] FLAG: --enable-debugging-handlers="true" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013451 2574 flags.go:64] FLAG: --enable-load-reader="false" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: 
I0423 17:52:10.013454 2574 flags.go:64] FLAG: --enable-server="true" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013456 2574 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013462 2574 flags.go:64] FLAG: --event-burst="100" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013465 2574 flags.go:64] FLAG: --event-qps="50" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013468 2574 flags.go:64] FLAG: --event-storage-age-limit="default=0" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013471 2574 flags.go:64] FLAG: --event-storage-event-limit="default=0" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013474 2574 flags.go:64] FLAG: --eviction-hard="" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013477 2574 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013480 2574 flags.go:64] FLAG: --eviction-minimum-reclaim="" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013484 2574 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Apr 23 17:52:10.017217 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013487 2574 flags.go:64] FLAG: --eviction-soft="" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013490 2574 flags.go:64] FLAG: --eviction-soft-grace-period="" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013493 2574 flags.go:64] FLAG: --exit-on-lock-contention="false" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013495 2574 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013498 2574 flags.go:64] FLAG: --experimental-mounter-path="" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013501 2574 flags.go:64] FLAG: --fail-cgroupv1="false" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013504 2574 flags.go:64] FLAG: --fail-swap-on="true" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013506 2574 flags.go:64] FLAG: --feature-gates="" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013510 2574 flags.go:64] FLAG: --file-check-frequency="20s" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013513 2574 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013516 2574 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013520 2574 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013523 2574 flags.go:64] FLAG: --healthz-port="10248" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013525 2574 flags.go:64] FLAG: --help="false" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013528 2574 flags.go:64] FLAG: --hostname-override="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013531 2574 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013537 2574 
flags.go:64] FLAG: --http-check-frequency="20s" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013540 2574 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013543 2574 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013546 2574 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013549 2574 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013552 2574 flags.go:64] FLAG: --image-service-endpoint="" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013555 2574 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 23 17:52:10.017869 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013558 2574 flags.go:64] FLAG: --kube-api-burst="100" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013561 2574 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013564 2574 flags.go:64] FLAG: --kube-api-qps="50" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013567 2574 flags.go:64] FLAG: --kube-reserved="" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013569 2574 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013572 2574 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013575 2574 flags.go:64] FLAG: --kubelet-cgroups="" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013578 2574 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013581 2574 flags.go:64] FLAG: --lock-file="" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013584 2574 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013587 2574 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013590 2574 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013609 2574 flags.go:64] FLAG: --log-json-split-stream="false" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013613 2574 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013616 2574 flags.go:64] FLAG: --log-text-split-stream="false" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013620 2574 flags.go:64] FLAG: --logging-format="text" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013623 2574 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013626 2574 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013629 2574 
flags.go:64] FLAG: --manifest-url="" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013633 2574 flags.go:64] FLAG: --manifest-url-header="" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013637 2574 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013640 2574 flags.go:64] FLAG: --max-open-files="1000000" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013644 2574 flags.go:64] FLAG: --max-pods="110" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013648 2574 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013651 2574 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 23 17:52:10.018433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013655 2574 flags.go:64] FLAG: --memory-manager-policy="None" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013657 2574 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013660 2574 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013663 2574 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013666 2574 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013674 2574 flags.go:64] FLAG: --node-status-max-images="50" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013677 2574 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013680 2574 flags.go:64] FLAG: --oom-score-adj="-999" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013683 2574 flags.go:64] FLAG: --pod-cidr="" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013685 2574 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013691 2574 flags.go:64] FLAG: --pod-manifest-path="" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013694 2574 flags.go:64] FLAG: --pod-max-pids="-1" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013697 2574 flags.go:64] FLAG: --pods-per-core="0" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013700 2574 flags.go:64] FLAG: --port="10250" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013703 2574 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013707 2574 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-029cfb1bcbc4d9e06" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013710 2574 flags.go:64] FLAG: --qos-reserved="" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013713 2574 flags.go:64] FLAG: --read-only-port="10255" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013716 2574 flags.go:64] FLAG: --register-node="true" Apr 23 17:52:10.019045 
ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013718 2574 flags.go:64] FLAG: --register-schedulable="true" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013721 2574 flags.go:64] FLAG: --register-with-taints="" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013725 2574 flags.go:64] FLAG: --registry-burst="10" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013728 2574 flags.go:64] FLAG: --registry-qps="5" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013730 2574 flags.go:64] FLAG: --reserved-cpus="" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013733 2574 flags.go:64] FLAG: --reserved-memory="" Apr 23 17:52:10.019045 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013737 2574 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013740 2574 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013743 2574 flags.go:64] FLAG: --rotate-certificates="false" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013746 2574 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013749 2574 flags.go:64] FLAG: --runonce="false" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013752 2574 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013755 2574 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013757 2574 flags.go:64] FLAG: --seccomp-default="false" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013761 2574 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013764 2574 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013767 2574 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013770 2574 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013773 2574 flags.go:64] FLAG: --storage-driver-password="root" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013775 2574 flags.go:64] FLAG: --storage-driver-secure="false" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013778 2574 flags.go:64] FLAG: --storage-driver-table="stats" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013781 2574 flags.go:64] FLAG: --storage-driver-user="root" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013784 2574 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013786 2574 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013789 2574 flags.go:64] FLAG: --system-cgroups="" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013792 2574 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 23 17:52:10.019648 ip-10-0-135-87 
kubenswrapper[2574]: I0423 17:52:10.013797 2574 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013800 2574 flags.go:64] FLAG: --tls-cert-file="" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013804 2574 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013808 2574 flags.go:64] FLAG: --tls-min-version="" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013811 2574 flags.go:64] FLAG: --tls-private-key-file="" Apr 23 17:52:10.019648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013814 2574 flags.go:64] FLAG: --topology-manager-policy="none" Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013816 2574 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013819 2574 flags.go:64] FLAG: --topology-manager-scope="container" Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013822 2574 flags.go:64] FLAG: --v="2" Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013826 2574 flags.go:64] FLAG: --version="false" Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013830 2574 flags.go:64] FLAG: --vmodule="" Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013834 2574 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.013837 2574 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013937 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013941 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013944 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013947 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013950 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013953 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013956 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013959 2574 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013962 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013965 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013967 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013969 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 
17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013972 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013974 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:10.020293 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013977 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013979 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013982 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013984 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013987 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013990 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013992 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013996 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.013998 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014001 2574 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014004 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014007 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014011 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014013 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014016 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014019 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014021 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014024 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014026 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014029 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:10.020827 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014031 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014034 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014036 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014040 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014042 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014046 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014049 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014051 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014054 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014056 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014059 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014061 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014064 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014066 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014069 2574 
feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014071 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014073 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014076 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014079 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014082 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:10.021356 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014085 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014087 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014089 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014092 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014095 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014097 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014100 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014102 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014105 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014107 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014110 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014112 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014116 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014120 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014122 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014125 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014129 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014134 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014136 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:10.021835 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014138 2574 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014141 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014144 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014146 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014149 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014151 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014154 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014156 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014159 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014161 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014164 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014166 2574 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.014170 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.015074 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true 
UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.022040 2574 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.022055 2574 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 23 17:52:10.022304 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022101 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022107 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022110 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022113 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022116 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022118 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022121 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022123 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022126 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022129 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022131 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022134 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022136 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022138 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022141 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022144 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022146 2574 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022149 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022151 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022154 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:10.022715 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022156 2574 
feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022159 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022162 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022165 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022167 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022169 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022172 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022174 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022177 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022179 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022182 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022186 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022189 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022191 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022193 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022196 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022199 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022201 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022203 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:10.023221 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022206 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022209 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022211 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022214 2574 feature_gate.go:328] unrecognized feature gate: 
AWSClusterHostedDNSInstall Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022216 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022219 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022221 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022224 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022226 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022229 2574 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022231 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022234 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022236 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022238 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022241 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022243 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022246 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022248 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022251 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022253 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:10.023709 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022256 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022259 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022261 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022263 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022266 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022270 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:10.024209 
ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022272 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022274 2574 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022278 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022282 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022284 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022286 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022289 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022291 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022294 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022296 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022299 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022301 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022304 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022306 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:10.024209 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022309 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022311 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022315 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022320 2574 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022322 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022325 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022328 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.022333 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022427 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022431 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022434 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022436 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022439 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022442 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022445 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:10.024683 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022448 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022452 2574 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022455 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022459 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022461 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022464 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022467 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022470 2574 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022472 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022475 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022477 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022479 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022482 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022484 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022487 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022489 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022491 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022494 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022496 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022499 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:10.025075 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022501 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022503 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022506 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022508 2574 feature_gate.go:328] unrecognized 
feature gate: PinnedImages Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022511 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022513 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022515 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022518 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022521 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022523 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022526 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022528 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022531 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022533 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022535 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022538 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022540 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022543 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022545 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:10.025569 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022548 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022550 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022552 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022555 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022557 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022559 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022562 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 
17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022565 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022567 2574 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022569 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022572 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022574 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022577 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022579 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022582 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022584 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022587 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022589 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022592 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022594 2574 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:10.026045 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022596 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022599 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022601 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022604 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022606 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022608 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022611 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022613 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022615 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022618 2574 feature_gate.go:328] unrecognized feature 
gate: ClusterAPIInstallIBMCloud Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022621 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022623 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022627 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022630 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022633 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022635 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022638 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022640 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022642 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:10.026531 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:10.022645 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:10.027031 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.022649 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:52:10.027031 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.023541 2574 server.go:962] "Client rotation is on, will bootstrap in background" Apr 23 17:52:10.027031 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.026751 2574 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 23 17:52:10.027948 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.027936 2574 server.go:1019] "Starting client certificate rotation" Apr 23 17:52:10.028043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.028026 2574 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 23 17:52:10.028935 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.028924 2574 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 23 17:52:10.056295 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.056275 2574 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 23 17:52:10.059541 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.059462 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 
23 17:52:10.081523 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.081507 2574 log.go:25] "Validated CRI v1 runtime API" Apr 23 17:52:10.087463 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.087443 2574 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 23 17:52:10.089295 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.089281 2574 log.go:25] "Validated CRI v1 image API" Apr 23 17:52:10.091672 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.091659 2574 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 23 17:52:10.097743 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.097721 2574 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/nvme0n1p2 7b581826-b945-47fc-a2d9-47be3d816a77:/dev/nvme0n1p3 ab7e970f-0243-4656-8107-2c4ac1b9983d:/dev/nvme0n1p4] Apr 23 17:52:10.097822 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.097743 2574 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 23 17:52:10.103711 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.103601 2574 manager.go:217] Machine: {Timestamp:2026-04-23 17:52:10.101432638 +0000 UTC m=+0.465473588 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3099673 MemoryCapacity:32812175360 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2a6c169fc925245017932c5081c429 SystemUUID:ec2a6c16-9fc9-2524-5017-932c5081c429 BootID:e15ceba2-1273-45d4-b640-3bad3504d375 Filesystems:[{Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16406085632 Type:vfs Inodes:4005392 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6562435072 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16406089728 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:6d:34:e0:7a:c9 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:6d:34:e0:7a:c9 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:12:f4:61:d7:c8:64 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:32812175360 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 
BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:34603008 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 23 17:52:10.103711 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.103709 2574 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 23 17:52:10.103837 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.103826 2574 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 23 17:52:10.105056 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.105025 2574 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 17:52:10.105216 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.105058 2574 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-135-87.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 23 17:52:10.105302 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.105233 2574 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 17:52:10.105302 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.105246 2574 container_manager_linux.go:306] "Creating device plugin manager" Apr 23 17:52:10.105302 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.105265 2574 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:52:10.105302 ip-10-0-135-87 
kubenswrapper[2574]: I0423 17:52:10.105293 2574 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:52:10.107298 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.107283 2574 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:52:10.107426 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.107414 2574 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 23 17:52:10.110228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.110216 2574 kubelet.go:491] "Attempting to sync node with API server" Apr 23 17:52:10.110286 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.110234 2574 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 17:52:10.110286 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.110250 2574 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 23 17:52:10.110286 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.110263 2574 kubelet.go:397] "Adding apiserver pod source" Apr 23 17:52:10.110414 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.110289 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 17:52:10.111551 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.111536 2574 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:52:10.111625 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.111560 2574 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:52:10.115024 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.115008 2574 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 23 17:52:10.116646 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.116633 2574 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 17:52:10.119027 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119010 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 23 17:52:10.119113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119038 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 23 17:52:10.119113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119052 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 23 17:52:10.119113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119063 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 23 17:52:10.119113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119074 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 23 17:52:10.119113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119086 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 23 17:52:10.119113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119096 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 23 17:52:10.119113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119105 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 23 17:52:10.119293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119117 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 23 17:52:10.119293 ip-10-0-135-87 kubenswrapper[2574]: 
I0423 17:52:10.119134 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 23 17:52:10.119293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119151 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 23 17:52:10.119293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.119163 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 23 17:52:10.121109 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.121098 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 23 17:52:10.121109 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.121109 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 23 17:52:10.126039 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.126002 2574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 17:52:10.126447 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.126424 2574 server.go:1295] "Started kubelet" Apr 23 17:52:10.126618 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.126594 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:10.126712 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.126669 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:10.126712 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.126655 2574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 17:52:10.126804 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.126722 2574 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 23 17:52:10.126804 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.126697 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 17:52:10.126804 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.126799 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:10.127342 ip-10-0-135-87 systemd[1]: Started Kubernetes Kubelet. Apr 23 17:52:10.128309 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.128242 2574 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 17:52:10.128901 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.128884 2574 server.go:317] "Adding debug handlers to kubelet server" Apr 23 17:52:10.136025 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.136005 2574 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 23 17:52:10.136669 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.136655 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 17:52:10.136761 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.136707 2574 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 23 17:52:10.137530 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137450 2574 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 23 17:52:10.137530 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137460 2574 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 23 17:52:10.137530 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137478 2574 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 23 17:52:10.137530 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137510 2574 factory.go:55] Registering systemd factory Apr 23 17:52:10.137530 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137525 2574 factory.go:223] Registration of the systemd container factory successfully Apr 23 17:52:10.137793 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137541 2574 reconstruct.go:97] "Volume reconstruction finished" Apr 23 17:52:10.137793 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137548 2574 reconciler.go:26] "Reconciler: start to sync state" Apr 23 17:52:10.137793 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.137656 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:52:10.137793 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137750 2574 factory.go:153] Registering CRI-O factory Apr 23 17:52:10.137793 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137764 2574 factory.go:223] Registration of the crio container factory successfully Apr 23 17:52:10.138046 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137817 2574 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 23 17:52:10.138046 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137934 2574 factory.go:103] Registering Raw factory Apr 23 17:52:10.138046 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.137947 2574 manager.go:1196] Started watching for new ooms in manager Apr 23 17:52:10.138345 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.138333 2574 manager.go:319] Starting recovery of all containers Apr 23 17:52:10.138470 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.138452 2574 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-t7bnq" Apr 23 17:52:10.140254 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.140219 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:10.140351 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.140279 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 23 17:52:10.142033 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.140389 2574 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd386d98f39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.126036793 +0000 UTC m=+0.490077730,LastTimestamp:2026-04-23 17:52:10.126036793 +0000 UTC m=+0.490077730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.149950 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.149933 2574 manager.go:324] Recovery completed Apr 23 17:52:10.155776 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.155761 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:10.158264 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.158251 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:10.158325 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.158276 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:10.158325 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.158288 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:10.158756 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.158743 2574 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 23 17:52:10.158756 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.158754 2574 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 23 17:52:10.158856 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.158768 2574 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:52:10.160898 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.160792 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c54f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,LastTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.161708 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.161695 2574 policy_none.go:49] "None policy: Start" Apr 23 17:52:10.161764 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.161712 2574 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 23 
17:52:10.161764 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.161722 2574 state_mem.go:35] "Initializing new in-memory state store" Apr 23 17:52:10.167987 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.167822 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5915c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,LastTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.177018 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.176951 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5bae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158291688 +0000 UTC m=+0.522332620,LastTimestamp:2026-04-23 17:52:10.158291688 +0000 UTC m=+0.522332620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.197460 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.197445 2574 manager.go:341] "Starting Device Plugin manager" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.197491 2574 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.197502 2574 server.go:85] "Starting device plugin registration server" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.197792 2574 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.197805 2574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.197944 2574 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.198053 2574 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.198066 2574 plugin_manager.go:118] "Starting Kubelet Plugin 
Manager" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.198804 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.198834 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:52:10.208339 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.207256 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd38b3eeded default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.199789037 +0000 UTC m=+0.563829959,LastTimestamp:2026-04-23 17:52:10.199789037 +0000 UTC m=+0.563829959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.266592 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.266522 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 23 17:52:10.267860 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.267831 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 23 17:52:10.267988 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.267873 2574 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 17:52:10.267988 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.267896 2574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 23 17:52:10.267988 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.267905 2574 kubelet.go:2451] "Starting kubelet main sync loop" Apr 23 17:52:10.268114 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.268004 2574 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 23 17:52:10.278051 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.278028 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:10.298337 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.298303 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:10.299241 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.299225 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:10.299315 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.299255 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:10.299315 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.299287 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:10.299383 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.299317 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.308195 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.308103 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c54f29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c54f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,LastTimestamp:2026-04-23 17:52:10.299239728 +0000 UTC m=+0.663280660,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.317339 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.317316 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.317419 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.317358 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5915c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5915c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,LastTimestamp:2026-04-23 17:52:10.299262713 +0000 UTC m=+0.663303646,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.325722 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.325661 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5bae8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5bae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158291688 +0000 UTC m=+0.522332620,LastTimestamp:2026-04-23 17:52:10.299292972 +0000 UTC m=+0.663333904,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.353350 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.353328 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Apr 23 17:52:10.368452 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.368420 2574 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal"] Apr 23 17:52:10.368553 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.368536 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:10.372051 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.372024 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:10.372130 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.372058 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:10.372130 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.372068 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:10.373201 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.373188 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 
17:52:10.373329 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.373315 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.373398 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.373347 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:10.373962 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.373948 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:10.374031 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.373971 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:10.374031 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.373985 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:10.374031 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.373948 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:10.374138 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.374034 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:10.374138 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.374052 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:10.374928 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.374914 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.374968 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.374940 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:10.375562 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.375545 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:10.375562 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.375574 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:10.375719 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.375589 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:10.380831 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.380732 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c54f29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c54f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,LastTimestamp:2026-04-23 17:52:10.372041521 +0000 UTC m=+0.736082454,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.390991 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.390921 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5915c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5915c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,LastTimestamp:2026-04-23 17:52:10.372063151 +0000 UTC m=+0.736104083,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.396506 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.396491 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.399946 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.399861 2574 event.go:359] "Server rejected event (will not retry!)" err="events 
\"ip-10-0-135-87.ec2.internal.18a90dd388c5bae8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5bae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158291688 +0000 UTC m=+0.522332620,LastTimestamp:2026-04-23 17:52:10.372071518 +0000 UTC m=+0.736112450,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.401708 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.401692 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.408985 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.408921 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c54f29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c54f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,LastTimestamp:2026-04-23 17:52:10.373962953 +0000 UTC m=+0.738003884,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.417799 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.417735 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5915c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5915c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,LastTimestamp:2026-04-23 17:52:10.373977614 +0000 UTC m=+0.738018549,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.426558 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.426502 2574 event.go:359] "Server rejected event (will 
not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5bae8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5bae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158291688 +0000 UTC m=+0.522332620,LastTimestamp:2026-04-23 17:52:10.373991604 +0000 UTC m=+0.738032538,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.436647 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.436584 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c54f29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c54f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,LastTimestamp:2026-04-23 17:52:10.374023202 +0000 UTC m=+0.738064139,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.439348 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.439326 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b86d5a8aaa7fecdf67a597e125a8b168-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal\" (UID: \"b86d5a8aaa7fecdf67a597e125a8b168\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.439409 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.439356 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b86d5a8aaa7fecdf67a597e125a8b168-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal\" (UID: \"b86d5a8aaa7fecdf67a597e125a8b168\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.439409 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.439396 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/53b72ef69aad199cf5c99ac6ebdc0a72-config\") pod \"kube-apiserver-proxy-ip-10-0-135-87.ec2.internal\" (UID: \"53b72ef69aad199cf5c99ac6ebdc0a72\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.447979 ip-10-0-135-87 kubenswrapper[2574]: E0423 
17:52:10.447918 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5915c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5915c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,LastTimestamp:2026-04-23 17:52:10.374040918 +0000 UTC m=+0.738081853,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.455565 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.455504 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5bae8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5bae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158291688 +0000 UTC m=+0.522332620,LastTimestamp:2026-04-23 17:52:10.37405816 +0000 UTC m=+0.738099093,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.464588 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.464522 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c54f29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c54f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,LastTimestamp:2026-04-23 17:52:10.375559205 +0000 UTC m=+0.739600136,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.473995 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.473938 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5915c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5915c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,LastTimestamp:2026-04-23 17:52:10.375580635 +0000 UTC m=+0.739621571,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.483186 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.483121 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5bae8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5bae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158291688 +0000 UTC m=+0.522332620,LastTimestamp:2026-04-23 17:52:10.375594448 +0000 UTC m=+0.739635379,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.518323 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.518241 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:10.519196 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.519180 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:10.519246 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.519213 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:10.519246 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.519227 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:10.519317 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.519255 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.526555 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.526494 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c54f29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c54f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 
ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,LastTimestamp:2026-04-23 17:52:10.519197203 +0000 UTC m=+0.883238134,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.528402 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.528382 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.529751 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.528420 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5915c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5915c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,LastTimestamp:2026-04-23 17:52:10.519220804 +0000 UTC m=+0.883261741,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.535737 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.535682 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5bae8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5bae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158291688 +0000 UTC m=+0.522332620,LastTimestamp:2026-04-23 17:52:10.519231319 +0000 UTC m=+0.883272252,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.540540 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.540521 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/53b72ef69aad199cf5c99ac6ebdc0a72-config\") pod \"kube-apiserver-proxy-ip-10-0-135-87.ec2.internal\" (UID: \"53b72ef69aad199cf5c99ac6ebdc0a72\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.540631 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.540548 2574 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b86d5a8aaa7fecdf67a597e125a8b168-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal\" (UID: \"b86d5a8aaa7fecdf67a597e125a8b168\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.540631 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.540566 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b86d5a8aaa7fecdf67a597e125a8b168-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal\" (UID: \"b86d5a8aaa7fecdf67a597e125a8b168\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.540725 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.540635 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b86d5a8aaa7fecdf67a597e125a8b168-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal\" (UID: \"b86d5a8aaa7fecdf67a597e125a8b168\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.540725 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.540645 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/53b72ef69aad199cf5c99ac6ebdc0a72-config\") pod \"kube-apiserver-proxy-ip-10-0-135-87.ec2.internal\" (UID: \"53b72ef69aad199cf5c99ac6ebdc0a72\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.540725 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.540635 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b86d5a8aaa7fecdf67a597e125a8b168-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal\" (UID: \"b86d5a8aaa7fecdf67a597e125a8b168\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.699407 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.699366 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.703245 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.703226 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.755523 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.755497 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Apr 23 17:52:10.929005 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.928934 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:10.929874 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.929838 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:10.929944 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.929888 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:10.929944 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.929902 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:10.929944 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:10.929938 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.938613 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.938519 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c54f29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c54f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158264105 +0000 UTC m=+0.522305037,LastTimestamp:2026-04-23 17:52:10.929873214 +0000 UTC m=+1.293914148,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:10.946824 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.946801 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:10.946824 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:10.946753 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-135-87.ec2.internal.18a90dd388c5915c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-135-87.ec2.internal.18a90dd388c5915c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-135-87.ec2.internal,UID:ip-10-0-135-87.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-135-87.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:10.158281052 +0000 UTC m=+0.522321983,LastTimestamp:2026-04-23 17:52:10.929893799 +0000 UTC m=+1.293934731,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:11.012243 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:11.012211 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:11.140743 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.140716 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:11.223184 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:11.223137 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53b72ef69aad199cf5c99ac6ebdc0a72.slice/crio-2219db3d976d696b91eaeb06b9413a1e82c20285e1668505d421abd8f0eca560 WatchSource:0}: Error finding container 2219db3d976d696b91eaeb06b9413a1e82c20285e1668505d421abd8f0eca560: Status 404 returned error can't find the container with id 2219db3d976d696b91eaeb06b9413a1e82c20285e1668505d421abd8f0eca560 Apr 23 17:52:11.223645 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:52:11.223624 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb86d5a8aaa7fecdf67a597e125a8b168.slice/crio-9f038d14080ac89bfca91421d5d4912bd90eb70e1755c0d3841703c3fdf8d3a5 WatchSource:0}: Error finding container 9f038d14080ac89bfca91421d5d4912bd90eb70e1755c0d3841703c3fdf8d3a5: Status 404 returned error can't find the container with id 9f038d14080ac89bfca91421d5d4912bd90eb70e1755c0d3841703c3fdf8d3a5 Apr 23 17:52:11.227589 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.227575 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 17:52:11.235604 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:11.235534 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd3c884de40 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\",Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.22778272 +0000 UTC m=+1.591823638,LastTimestamp:2026-04-23 17:52:11.22778272 +0000 UTC m=+1.591823638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:11.245179 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:11.245113 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-135-87.ec2.internal.18a90dd3c8862680 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-135-87.ec2.internal,UID:53b72ef69aad199cf5c99ac6ebdc0a72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\",Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.227866752 +0000 UTC m=+1.591907671,LastTimestamp:2026-04-23 17:52:11.227866752 +0000 UTC m=+1.591907671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:11.270947 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.270903 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerStarted","Data":"9f038d14080ac89bfca91421d5d4912bd90eb70e1755c0d3841703c3fdf8d3a5"} Apr 23 17:52:11.271724 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.271707 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" event={"ID":"53b72ef69aad199cf5c99ac6ebdc0a72","Type":"ContainerStarted","Data":"2219db3d976d696b91eaeb06b9413a1e82c20285e1668505d421abd8f0eca560"} Apr 23 17:52:11.287227 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:11.287207 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:11.565114 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:11.565024 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Apr 23 17:52:11.619787 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:11.619751 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:11.619787 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:11.619751 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:11.747347 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.747313 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:11.749484 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.749459 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:11.749599 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.749497 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:11.749599 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.749513 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:11.749599 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:11.749547 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:11.759532 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:11.759484 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:12.137590 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:12.137559 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:12.844017 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:12.843941 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-135-87.ec2.internal.18a90dd4284bebdb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-135-87.ec2.internal,UID:53b72ef69aad199cf5c99ac6ebdc0a72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\" in 1.606s (1.606s including waiting). 
Image size: 488332864 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:12.834663387 +0000 UTC m=+3.198704329,LastTimestamp:2026-04-23 17:52:12.834663387 +0000 UTC m=+3.198704329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:12.853316 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:12.853239 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4286aa068 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" in 1.608s (1.608s including waiting). Image size: 468435751 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:12.836675688 +0000 UTC m=+3.200716628,LastTimestamp:2026-04-23 17:52:12.836675688 +0000 UTC m=+3.200716628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:12.921664 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:12.921590 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-135-87.ec2.internal.18a90dd42cf3f07e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-135-87.ec2.internal,UID:53b72ef69aad199cf5c99ac6ebdc0a72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Created,Message:Created container: haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:12.912783486 +0000 UTC m=+3.276824421,LastTimestamp:2026-04-23 17:52:12.912783486 +0000 UTC m=+3.276824421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:12.930512 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:12.930447 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-135-87.ec2.internal.18a90dd42d662013 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-135-87.ec2.internal,UID:53b72ef69aad199cf5c99ac6ebdc0a72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Started,Message:Started container 
haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:12.920266771 +0000 UTC m=+3.284307708,LastTimestamp:2026-04-23 17:52:12.920266771 +0000 UTC m=+3.284307708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:13.013052 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:13.013021 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:13.135578 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.135515 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:13.175231 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:13.175205 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Apr 23 17:52:13.277025 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.276988 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" event={"ID":"53b72ef69aad199cf5c99ac6ebdc0a72","Type":"ContainerStarted","Data":"7523ec995b3505aeb02eb505d9dc29366f7c027d1a9cc25198992e54af73dcbd"} Apr 23 17:52:13.277174 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.277054 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:13.277905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.277888 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:13.278017 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.277921 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:13.278017 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.277935 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:13.278193 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:13.278178 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:13.359760 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.359723 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:13.360713 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.360695 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:13.360812 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.360725 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" 
event="NodeHasNoDiskPressure" Apr 23 17:52:13.360812 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.360736 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:13.360812 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:13.360761 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:13.378370 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:13.378349 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:13.501973 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:13.501951 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:13.523896 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:13.523802 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd450b390c6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:13.512544454 +0000 UTC m=+3.876585389,LastTimestamp:2026-04-23 17:52:13.512544454 +0000 UTC m=+3.876585389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:13.530559 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:13.530477 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd451372faf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:13.521170351 +0000 UTC m=+3.885211283,LastTimestamp:2026-04-23 17:52:13.521170351 +0000 UTC m=+3.885211283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:13.735638 ip-10-0-135-87 
kubenswrapper[2574]: E0423 17:52:13.735563 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:14.135024 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.134944 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:14.280325 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.280166 2574 generic.go:358] "Generic (PLEG): container finished" podID="b86d5a8aaa7fecdf67a597e125a8b168" containerID="acf69241d7a075c733daf5e70ff7ee639bf160bc5612a775a4f73612fe6b801b" exitCode=0 Apr 23 17:52:14.280325 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.280245 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:14.280770 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.280385 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:14.280770 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.280245 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerDied","Data":"acf69241d7a075c733daf5e70ff7ee639bf160bc5612a775a4f73612fe6b801b"} Apr 23 17:52:14.281225 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.281207 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:14.281340 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.281210 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:14.281340 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.281262 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:14.281340 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.281278 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:14.281340 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.281235 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:14.281340 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:14.281314 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:14.281564 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:14.281530 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:14.281622 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:14.281569 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:14.293381 ip-10-0-135-87 
kubenswrapper[2574]: E0423 17:52:14.293306 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.283471547 +0000 UTC m=+4.647512482,LastTimestamp:2026-04-23 17:52:14.283471547 +0000 UTC m=+4.647512482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:14.397970 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:14.397899 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.389176834 +0000 UTC m=+4.753217765,LastTimestamp:2026-04-23 17:52:14.389176834 +0000 UTC m=+4.753217765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:14.408611 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:14.408530 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.397451566 +0000 UTC m=+4.761492497,LastTimestamp:2026-04-23 17:52:14.397451566 +0000 UTC m=+4.761492497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:14.488042 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:14.488005 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:15.138090 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.138058 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:15.282894 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.282867 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/0.log" Apr 23 17:52:15.283257 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.283173 2574 generic.go:358] "Generic (PLEG): container finished" podID="b86d5a8aaa7fecdf67a597e125a8b168" containerID="fb503b8d6c2ff577bebc16b3bf7474e8ec19e3e8e7f0f646bf31bdbdde64151d" exitCode=1 Apr 23 17:52:15.283257 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.283206 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerDied","Data":"fb503b8d6c2ff577bebc16b3bf7474e8ec19e3e8e7f0f646bf31bdbdde64151d"} Apr 23 17:52:15.283355 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.283256 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:15.284063 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.284049 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:15.284125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.284077 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:15.284125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.284088 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:15.284261 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:15.284250 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:15.284308 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:15.284299 2574 scope.go:117] "RemoveContainer" containerID="fb503b8d6c2ff577bebc16b3bf7474e8ec19e3e8e7f0f646bf31bdbdde64151d" Apr 23 17:52:15.293225 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:15.293131 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.283471547 +0000 UTC m=+4.647512482,LastTimestamp:2026-04-23 17:52:15.286116488 +0000 UTC m=+5.650157428,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:15.393672 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:15.393566 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.389176834 +0000 UTC m=+4.753217765,LastTimestamp:2026-04-23 17:52:15.383623647 +0000 UTC m=+5.747664585,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:15.404016 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:15.403933 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.397451566 +0000 UTC m=+4.761492497,LastTimestamp:2026-04-23 17:52:15.392244101 +0000 UTC m=+5.756285033,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:16.136835 ip-10-0-135-87 
kubenswrapper[2574]: I0423 17:52:16.136802 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:16.288290 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.288265 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/1.log" Apr 23 17:52:16.288623 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.288613 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/0.log" Apr 23 17:52:16.288939 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.288918 2574 generic.go:358] "Generic (PLEG): container finished" podID="b86d5a8aaa7fecdf67a597e125a8b168" containerID="7539ac0988805b856bd1b94d908241f0beb5ba1c424a47acc34935a52844a5ef" exitCode=1 Apr 23 17:52:16.289007 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.288951 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerDied","Data":"7539ac0988805b856bd1b94d908241f0beb5ba1c424a47acc34935a52844a5ef"} Apr 23 17:52:16.289007 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.288976 2574 scope.go:117] "RemoveContainer" containerID="fb503b8d6c2ff577bebc16b3bf7474e8ec19e3e8e7f0f646bf31bdbdde64151d" Apr 23 17:52:16.289081 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.289016 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:16.290159 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.289885 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:16.290159 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.289920 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:16.290159 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.289931 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:16.290159 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:16.290158 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:16.290361 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.290208 2574 scope.go:117] "RemoveContainer" containerID="7539ac0988805b856bd1b94d908241f0beb5ba1c424a47acc34935a52844a5ef" Apr 23 17:52:16.290408 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:16.290358 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" 
podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:52:16.298191 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:16.298096 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:16.384667 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:16.384639 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Apr 23 17:52:16.578893 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.578785 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:16.579794 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.579774 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:16.579922 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.579804 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:16.579922 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.579813 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:16.579922 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:16.579839 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:16.595271 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:16.595245 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:17.134156 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:17.134127 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:17.134292 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:17.134226 2574 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:17.291683 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:17.291657 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/1.log" Apr 23 17:52:17.292122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:17.292106 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:17.292871 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:17.292829 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:17.292941 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:17.292885 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:17.292941 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:17.292901 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:17.293196 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:17.293181 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:17.293254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:17.293243 2574 scope.go:117] "RemoveContainer" containerID="7539ac0988805b856bd1b94d908241f0beb5ba1c424a47acc34935a52844a5ef" Apr 23 17:52:17.293392 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:17.293377 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:52:17.295724 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:17.295650 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:52:17.293342722 +0000 UTC m=+7.657383653,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:18.134379 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:18.134351 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:18.241761 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:18.241730 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:18.989833 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:18.989801 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:19.136991 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:19.136964 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:20.134980 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:20.134947 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:20.199047 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:20.199024 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:52:20.261689 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:20.261657 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:21.136222 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:21.136190 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:22.137461 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:22.137427 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:22.792851 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:22.792824 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:22.996264 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:22.996238 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:22.997260 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:22.997244 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:22.997327 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:22.997276 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:22.997327 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:22.997286 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:22.997327 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:22.997311 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:23.013950 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:23.013930 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:23.137016 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:23.136967 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:24.135485 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:24.135450 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:25.137089 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:25.137053 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:25.310440 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:25.310402 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:26.135712 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:26.135683 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" 
in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:26.925288 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:26.925259 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:27.138545 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:27.138518 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:28.134510 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:28.134478 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:28.625168 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:28.625085 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:29.137275 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:29.137249 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:29.803042 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:29.803014 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:30.015030 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:30.015001 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:30.016384 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:30.016369 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:30.016433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:30.016402 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:30.016433 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:30.016415 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:30.016498 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:30.016439 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:30.032799 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:30.032773 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get 
resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:30.135650 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:30.135602 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:30.200078 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:30.200047 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:52:31.138463 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:31.138431 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:32.135007 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:32.134975 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:32.268306 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:32.268273 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:32.269218 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:32.269200 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:32.269331 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:32.269237 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:32.269331 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:32.269252 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:32.269525 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:32.269509 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:32.269587 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:32.269573 2574 scope.go:117] "RemoveContainer" containerID="7539ac0988805b856bd1b94d908241f0beb5ba1c424a47acc34935a52844a5ef" Apr 23 17:52:32.280259 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:32.280175 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.283471547 +0000 UTC m=+4.647512482,LastTimestamp:2026-04-23 17:52:32.27148196 +0000 UTC m=+22.635522900,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:32.374256 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:32.374174 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.389176834 +0000 UTC m=+4.753217765,LastTimestamp:2026-04-23 17:52:32.36534493 +0000 UTC m=+22.729385865,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:32.384138 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:32.384069 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.397451566 +0000 UTC m=+4.761492497,LastTimestamp:2026-04-23 17:52:32.374019551 +0000 UTC m=+22.738060493,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:32.432515 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:32.432491 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:33.136019 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.135990 2574 csi_plugin.go:988] 
Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:33.314419 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.314395 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/2.log" Apr 23 17:52:33.314827 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.314810 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/1.log" Apr 23 17:52:33.315186 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.315166 2574 generic.go:358] "Generic (PLEG): container finished" podID="b86d5a8aaa7fecdf67a597e125a8b168" containerID="7f6ccc6553ad51c7b5b53a4cc794d355fb8ce681d1162c033f4b332ac0d58f6f" exitCode=1 Apr 23 17:52:33.315234 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.315200 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerDied","Data":"7f6ccc6553ad51c7b5b53a4cc794d355fb8ce681d1162c033f4b332ac0d58f6f"} Apr 23 17:52:33.315234 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.315226 2574 scope.go:117] "RemoveContainer" containerID="7539ac0988805b856bd1b94d908241f0beb5ba1c424a47acc34935a52844a5ef" Apr 23 17:52:33.315346 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.315332 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:33.316282 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.316068 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:33.316282 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.316099 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:33.316282 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.316113 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:33.316401 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:33.316385 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:33.316457 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:33.316446 2574 scope.go:117] "RemoveContainer" containerID="7f6ccc6553ad51c7b5b53a4cc794d355fb8ce681d1162c033f4b332ac0d58f6f" Apr 23 17:52:33.316590 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:33.316576 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:52:33.323531 ip-10-0-135-87 kubenswrapper[2574]: 
E0423 17:52:33.323460 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:52:33.316542705 +0000 UTC m=+23.680583637,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:34.138261 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:34.138229 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:34.318338 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:34.318311 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/2.log" Apr 23 17:52:35.134915 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:35.134878 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:36.137206 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:36.137173 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:36.810822 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:36.810789 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:37.033484 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:37.033445 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:37.034432 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:37.034414 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:37.034518 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:37.034444 2574 kubelet_node_status.go:736] "Recording 
event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:37.034518 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:37.034454 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:37.034518 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:37.034481 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:37.052388 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:37.052363 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:37.137072 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:37.137011 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:38.138135 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:38.138104 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:39.136206 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:39.136173 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:40.133980 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:40.133947 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:40.200216 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:40.200190 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:52:41.067891 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:41.067839 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:41.134625 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:41.134601 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:42.135953 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:42.135919 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 
17:52:43.134310 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:43.134285 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:43.450864 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:43.450819 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:43.819837 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:43.819759 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:44.052749 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:44.052711 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:44.053723 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:44.053705 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:44.053801 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:44.053739 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:44.053801 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:44.053754 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:44.053801 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:44.053779 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:44.070920 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:44.070834 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:44.137069 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:44.137044 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:45.135242 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:45.135204 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:45.708077 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:45.708041 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:46.137085 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:46.137015 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:47.137365 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:47.137333 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:48.134963 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:48.134932 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:48.268888 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:48.268821 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:48.270151 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:48.270133 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:48.270249 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:48.270163 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:48.270249 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:48.270173 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:48.270375 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:48.270362 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:48.270424 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:48.270415 2574 scope.go:117] "RemoveContainer" containerID="7f6ccc6553ad51c7b5b53a4cc794d355fb8ce681d1162c033f4b332ac0d58f6f" Apr 23 17:52:48.270547 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:48.270533 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:52:48.279762 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:48.279686 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:52:48.27050876 +0000 UTC m=+38.634549692,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:52:49.137273 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:49.137229 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:50.137300 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:50.137266 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:50.200755 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:50.200720 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:52:50.830785 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:50.830744 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:51.071894 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:51.071836 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:51.072876 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:51.072831 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:51.072964 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:51.072897 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:51.072964 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:51.072908 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:51.072964 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:51.072936 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:51.088814 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:51.088749 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:51.135361 
ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:51.135335 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:52.135160 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:52.135126 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:53.136890 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:53.136836 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:53.143937 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:53.143913 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:54.135720 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:54.135687 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:55.135945 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:55.135914 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:56.138956 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:56.138922 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:57.134634 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:57.134602 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:57.841661 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:57.841614 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:58.089793 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:58.089766 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:58.090713 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:58.090691 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" 
event="NodeHasSufficientMemory" Apr 23 17:52:58.090870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:58.090726 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:58.090870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:58.090736 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:58.090870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:58.090763 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:58.107979 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:52:58.107916 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:52:58.133941 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:58.133922 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:59.136172 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:52:59.136146 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:00.136225 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:00.136197 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:00.201671 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:00.201647 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:53:00.268553 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:00.268524 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:00.269399 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:00.269381 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:00.269460 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:00.269414 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:00.269460 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:00.269430 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:00.269720 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:00.269705 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:00.269770 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:00.269761 2574 scope.go:117] "RemoveContainer" 
containerID="7f6ccc6553ad51c7b5b53a4cc794d355fb8ce681d1162c033f4b332ac0d58f6f" Apr 23 17:53:00.279195 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:00.279086 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.283471547 +0000 UTC m=+4.647512482,LastTimestamp:2026-04-23 17:53:00.270506568 +0000 UTC m=+50.634547508,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:00.374962 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:00.374887 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.389176834 +0000 UTC m=+4.753217765,LastTimestamp:2026-04-23 17:53:00.366739956 +0000 UTC m=+50.730780897,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:00.384444 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:00.384358 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.397451566 +0000 UTC m=+4.761492497,LastTimestamp:2026-04-23 17:53:00.374416073 +0000 UTC m=+50.738457008,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:01.137397 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.137359 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:01.357053 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.357023 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/3.log" Apr 23 17:53:01.357403 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.357386 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/2.log" Apr 23 17:53:01.357734 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.357716 2574 generic.go:358] "Generic (PLEG): container finished" podID="b86d5a8aaa7fecdf67a597e125a8b168" containerID="f90a7116f92678b3771dbc3054cabdfc6ab656c6f26d2898bf6089949fa3116b" exitCode=1 Apr 23 17:53:01.357778 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.357747 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerDied","Data":"f90a7116f92678b3771dbc3054cabdfc6ab656c6f26d2898bf6089949fa3116b"} Apr 23 17:53:01.357811 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.357775 2574 scope.go:117] "RemoveContainer" containerID="7f6ccc6553ad51c7b5b53a4cc794d355fb8ce681d1162c033f4b332ac0d58f6f" Apr 23 17:53:01.357891 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.357878 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:01.363832 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.363815 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:01.363946 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.363860 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:01.363946 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.363871 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:01.364086 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:01.364073 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:01.364143 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:01.364121 2574 scope.go:117] "RemoveContainer" containerID="f90a7116f92678b3771dbc3054cabdfc6ab656c6f26d2898bf6089949fa3116b" Apr 23 17:53:01.364253 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:01.364239 2574 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:53:01.373084 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:01.373007 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:53:01.364211531 +0000 UTC m=+51.728252466,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:02.135115 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:02.135090 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:02.360789 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:02.360761 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/3.log" Apr 23 17:53:03.137021 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:03.136985 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:04.134560 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:04.134526 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:04.851068 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:04.851026 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 
17:53:05.108555 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:05.108446 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:05.109410 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:05.109392 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:05.109520 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:05.109427 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:05.109520 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:05.109441 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:05.109520 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:05.109476 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:05.126647 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:05.126614 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:05.135090 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:05.135066 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:06.140775 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:06.140739 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:07.134496 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:07.134462 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:08.136412 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:08.136378 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:09.135550 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:09.135516 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:09.301393 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:09.301360 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:53:10.137375 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:10.137342 2574 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:10.201965 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:10.201934 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:53:11.137620 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:11.137586 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:11.861046 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:11.861012 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:12.127679 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:12.127581 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:12.129445 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:12.129428 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:12.129531 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:12.129466 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:12.129531 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:12.129476 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:12.129531 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:12.129503 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:12.136781 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:12.136758 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:12.143718 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:12.143695 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:13.137410 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:13.137380 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:14.135425 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:14.135389 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:15.138149 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:15.138115 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:15.268665 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:15.268635 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:15.269605 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:15.269586 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:15.269695 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:15.269617 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:15.269695 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:15.269627 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:15.269861 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:15.269834 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:15.269907 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:15.269898 2574 scope.go:117] "RemoveContainer" containerID="f90a7116f92678b3771dbc3054cabdfc6ab656c6f26d2898bf6089949fa3116b" Apr 23 17:53:15.270058 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:15.270043 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:53:15.276488 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:15.276414 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:53:15.269990562 +0000 UTC m=+65.634031494,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:16.135596 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:16.135563 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:17.138027 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:17.137997 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:18.135931 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:18.135902 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:18.869034 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:18.868996 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:19.137437 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:19.137363 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:19.144597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:19.144575 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:19.145458 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:19.145438 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:19.145556 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:19.145468 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:19.145556 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:19.145478 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:19.145556 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:19.145507 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:19.162404 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:19.162381 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:20.134677 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:20.134637 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Apr 23 17:53:20.202336 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:20.202299 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:53:20.268751 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:20.268723 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:20.269525 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:20.269508 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:20.269616 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:20.269543 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:20.269616 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:20.269560 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:20.269802 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:20.269788 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:21.138576 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:21.138544 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:22.135996 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:22.135969 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:23.137158 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:23.137126 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:23.822506 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:23.822469 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:53:24.136962 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:24.136889 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:25.136605 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:25.136578 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:25.880637 ip-10-0-135-87 
kubenswrapper[2574]: E0423 17:53:25.880598 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:26.135381 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:26.135313 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:26.162903 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:26.162883 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:26.163809 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:26.163792 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:26.163911 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:26.163821 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:26.163911 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:26.163830 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:26.163911 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:26.163870 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:26.181514 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:26.181487 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:27.094779 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:27.094738 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:53:27.134086 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:27.134061 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:28.136150 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:28.136084 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:29.136788 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:29.136753 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:29.269004 ip-10-0-135-87 
kubenswrapper[2574]: I0423 17:53:29.268974 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:29.269901 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:29.269884 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:29.269984 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:29.269916 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:29.269984 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:29.269928 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:29.270145 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:29.270132 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:29.270201 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:29.270181 2574 scope.go:117] "RemoveContainer" containerID="f90a7116f92678b3771dbc3054cabdfc6ab656c6f26d2898bf6089949fa3116b" Apr 23 17:53:29.270320 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:29.270305 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:53:29.281477 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:29.281395 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:53:29.270278844 +0000 UTC m=+79.634319775,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:30.134969 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:30.134936 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 
17:53:30.202792 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:30.202750 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:53:31.137331 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:31.137298 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:32.135403 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:32.135369 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:32.890795 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:32.890761 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:33.134168 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:33.134137 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:33.182263 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:33.182238 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:33.183156 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:33.183137 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:33.183228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:33.183170 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:33.183228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:33.183185 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:33.183228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:33.183216 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:33.201764 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:33.201741 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:34.137965 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:34.137927 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:35.134054 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:35.134021 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:36.136810 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:36.136776 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:37.135172 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:37.135135 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:37.281810 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:37.281775 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:53:38.136158 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:38.136128 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:39.135528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:39.135501 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:39.899031 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:39.898997 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:40.137268 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:40.137237 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:40.202852 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:40.202822 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:40.203008 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:40.202880 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:53:40.203647 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:40.203631 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:40.203711 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:40.203666 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:40.203711 ip-10-0-135-87 kubenswrapper[2574]: 
I0423 17:53:40.203681 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:40.203808 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:40.203716 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:40.221939 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:40.221915 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:41.133993 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:41.133964 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:42.136379 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.136350 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:42.269121 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.269091 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:42.270051 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.270028 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:42.270176 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.270062 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:42.270176 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.270076 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:42.270389 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:42.270374 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:42.270450 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.270439 2574 scope.go:117] "RemoveContainer" containerID="f90a7116f92678b3771dbc3054cabdfc6ab656c6f26d2898bf6089949fa3116b" Apr 23 17:53:42.280621 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:42.280520 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd47ea6fabb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.283471547 +0000 UTC m=+4.647512482,LastTimestamp:2026-04-23 17:53:42.271170431 +0000 UTC m=+92.635211353,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:42.377003 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:42.376921 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd484f3ea02 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.389176834 +0000 UTC m=+4.753217765,LastTimestamp:2026-04-23 17:53:42.368661711 +0000 UTC m=+92.732702666,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:42.384198 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:42.384122 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd485722d2e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:14.397451566 +0000 UTC m=+4.761492497,LastTimestamp:2026-04-23 17:53:42.377403258 +0000 UTC m=+92.741444197,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:42.418753 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.418730 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/3.log" Apr 23 17:53:42.419036 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.419015 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" 
event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerStarted","Data":"2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb"} Apr 23 17:53:42.419133 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.419122 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:42.419887 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.419874 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:42.419940 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.419901 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:42.419940 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:42.419911 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:42.420089 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:42.420075 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:43.136636 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.136607 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:43.421797 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.421725 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 17:53:43.422166 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.422148 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/3.log" Apr 23 17:53:43.422470 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.422447 2574 generic.go:358] "Generic (PLEG): container finished" podID="b86d5a8aaa7fecdf67a597e125a8b168" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" exitCode=1 Apr 23 17:53:43.422556 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.422483 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerDied","Data":"2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb"} Apr 23 17:53:43.422556 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.422522 2574 scope.go:117] "RemoveContainer" containerID="f90a7116f92678b3771dbc3054cabdfc6ab656c6f26d2898bf6089949fa3116b" Apr 23 17:53:43.422717 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.422701 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:43.423683 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.423554 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:43.423683 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.423587 2574 kubelet_node_status.go:736] "Recording event message for node" 
node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:43.423683 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.423603 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:43.423821 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:43.423806 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:43.423900 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:43.423888 2574 scope.go:117] "RemoveContainer" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" Apr 23 17:53:43.424059 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:43.424044 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:53:43.432067 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:43.431999 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:53:43.424014314 +0000 UTC m=+93.788055245,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:44.138013 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:44.137979 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:44.425019 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:44.424947 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 17:53:45.135564 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:45.135532 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:46.138129 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:46.138094 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:46.907562 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:46.907527 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:47.137817 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:47.137788 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:47.222488 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:47.222463 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:47.223381 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:47.223366 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:47.223429 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:47.223396 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:47.223429 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:47.223406 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:47.223429 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:47.223429 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:47.241532 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:47.241511 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:48.136206 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:48.136181 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:49.137791 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:49.137762 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:50.135836 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:50.135804 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Apr 23 17:53:50.203584 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:50.203559 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:53:51.138166 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:51.138129 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:52.135642 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:52.135616 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:53.136374 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:53.136341 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:53.916576 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:53.916540 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:54.137021 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:54.136993 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:54.241747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:54.241717 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:54.244642 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:54.244625 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:54.244709 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:54.244659 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:54.244709 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:54.244669 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:54.244709 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:54.244696 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:54.263421 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:54.263398 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:55.135466 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:55.135434 2574 csi_plugin.go:988] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:55.268233 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:55.268205 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:55.269950 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:55.269933 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:55.270052 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:55.269974 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:55.270052 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:55.269987 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:55.270224 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:55.270210 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:53:55.270279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:55.270267 2574 scope.go:117] "RemoveContainer" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" Apr 23 17:53:55.270419 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:55.270401 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:53:55.280299 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:55.280216 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal.18a90dd4f645064a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal,UID:b86d5a8aaa7fecdf67a597e125a8b168,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168),Source:EventSource{Component:kubelet,Host:ip-10-0-135-87.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.290317898 +0000 UTC m=+6.654358833,LastTimestamp:2026-04-23 17:53:55.270372611 +0000 UTC m=+105.634413552,Count:9,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-135-87.ec2.internal,}" Apr 23 17:53:56.146164 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:56.146134 2574 csi_plugin.go:988] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:57.139105 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:57.139069 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:57.784945 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:53:57.784912 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:53:58.137946 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:58.137879 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:59.137070 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:53:59.137041 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:00.136118 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:00.136089 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:00.204572 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:00.204544 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:00.926574 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:00.926540 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:54:01.138722 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:01.138698 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:01.264552 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:01.264491 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:01.265422 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:01.265405 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:01.265506 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:01.265435 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" 
event="NodeHasNoDiskPressure" Apr 23 17:54:01.265506 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:01.265445 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:01.265506 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:01.265471 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:54:01.282065 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:01.282043 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-135-87.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-135-87.ec2.internal" Apr 23 17:54:02.135501 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:02.135472 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:03.135997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:03.135967 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:04.135059 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:04.135026 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:05.138394 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:05.138359 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-135-87.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:05.266093 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:05.266064 2574 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-t7bnq" Apr 23 17:54:05.980652 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:05.980624 2574 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:54:06.028479 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.028458 2574 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 23 17:54:06.028589 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.028573 2574 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 23 17:54:06.171970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.171948 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.203473 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.203455 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.267683 ip-10-0-135-87 
kubenswrapper[2574]: I0423 17:54:06.267629 2574 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-22 17:49:05 +0000 UTC" deadline="2028-01-07 16:00:07.940731468 +0000 UTC" Apr 23 17:54:06.267683 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.267653 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="14974h6m1.673080627s" Apr 23 17:54:06.282882 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.282864 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.558310 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.558240 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.558310 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:06.558268 2574 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.595409 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.595396 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.618152 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.618136 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.683999 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.683982 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.971239 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:06.971212 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:06.971239 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:06.971234 2574 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:07.235855 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:07.235779 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:07.255156 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:07.255139 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:07.315068 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:07.315049 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:07.591491 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:07.591419 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:07.591491 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:07.591443 2574 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-135-87.ec2.internal" not found Apr 23 17:54:07.935017 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:07.934992 2574 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:54:08.268781 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.268719 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Apr 23 17:54:08.269806 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.269787 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:08.269946 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.269833 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:08.269946 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.269869 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:08.270193 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.270174 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-135-87.ec2.internal\" not found" node="ip-10-0-135-87.ec2.internal" Apr 23 17:54:08.270263 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.270250 2574 scope.go:117] "RemoveContainer" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" Apr 23 17:54:08.270445 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.270423 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:54:08.282678 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.282657 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:08.283492 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.283471 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:08.283579 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.283506 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:08.283579 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.283523 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:08.283579 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.283562 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:54:08.300313 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.300290 2574 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-135-87.ec2.internal" Apr 23 17:54:08.300399 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.300314 2574 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-135-87.ec2.internal\": node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:08.317027 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.316999 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:08.417397 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.417362 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 
17:54:08.517595 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.517573 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:08.617919 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.617871 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:08.690248 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:08.690222 2574 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:54:08.718505 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.718482 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:08.818958 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.818929 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:08.919455 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:08.919399 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.019906 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.019883 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.120570 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.120547 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.209295 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:09.209275 2574 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 23 17:54:09.220895 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.220878 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.225311 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:09.225294 2574 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 23 17:54:09.254971 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:09.254951 2574 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-zh74z" Apr 23 17:54:09.264193 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:09.264173 2574 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-zh74z" Apr 23 17:54:09.321764 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.321748 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.422221 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.422191 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.522775 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.522712 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.623244 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.623220 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.723817 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.723794 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.824240 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.824192 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:09.924297 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:09.924274 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:10.024921 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:10.024902 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:10.126036 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:10.125984 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:10.204884 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:10.204838 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-135-87.ec2.internal\" not found" Apr 23 17:54:10.226708 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:10.226691 2574 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Apr 23 17:54:10.265143 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:10.265120 2574 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:09 +0000 UTC" deadline="2028-01-13 23:52:38.831468238 +0000 UTC" Apr 23 17:54:10.265143 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:10.265140 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15125h58m28.566330942s" Apr 23 17:54:11.265917 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:11.265867 2574 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:09 +0000 UTC" deadline="2027-12-24 06:09:23.430046456 +0000 UTC" Apr 23 17:54:11.265917 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:11.265908 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14628h15m12.16414286s" Apr 23 17:54:12.188524 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:12.188494 2574 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:54:12.237034 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:12.237009 2574 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" Apr 23 17:54:12.252632 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:12.252609 2574 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 17:54:12.253696 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:12.253680 2574 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" Apr 23 17:54:12.262866 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:12.262835 2574 warnings.go:110] "Warning: metadata.name: this 
is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 17:54:13.173577 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.173554 2574 apiserver.go:52] "Watching apiserver" Apr 23 17:54:13.181156 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.181138 2574 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 23 17:54:13.181502 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.181483 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/konnectivity-agent-v2msm","openshift-image-registry/node-ca-4767s","openshift-multus/multus-hhc5p","openshift-multus/network-metrics-daemon-v8bcb","openshift-network-diagnostics/network-check-target-vfxjl","openshift-network-operator/iptables-alerter-jrchc","openshift-ovn-kubernetes/ovnkube-node-hhdnl","kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp","openshift-cluster-node-tuning-operator/tuned-8htbm","openshift-dns/node-resolver-z72z9","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal","openshift-multus/multus-additional-cni-plugins-rn6ls"] Apr 23 17:54:13.184091 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.184072 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.186177 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.186150 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.186732 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.186713 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 23 17:54:13.188026 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.187975 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 23 17:54:13.188026 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.188001 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 23 17:54:13.188216 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.188001 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 23 17:54:13.188216 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.188046 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 23 17:54:13.188216 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.188078 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 23 17:54:13.188216 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.188111 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-2vg5p\"" Apr 23 17:54:13.188387 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.188373 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.190394 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.190089 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 23 17:54:13.190394 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.190345 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 23 17:54:13.190495 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.190396 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-xb587\"" Apr 23 17:54:13.192366 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.190957 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 23 17:54:13.192529 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.192505 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 23 17:54:13.192764 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.192732 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:13.192948 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.192931 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 23 17:54:13.193009 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.192969 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:13.193348 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.193332 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 23 17:54:13.194003 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.193925 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 23 17:54:13.194141 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.194124 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-4s597\"" Apr 23 17:54:13.195104 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.195086 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:13.195185 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.195165 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:13.197276 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.197260 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.199514 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.199495 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:13.201625 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.201608 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:13.201718 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.201608 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:13.201770 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.201718 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 23 17:54:13.201888 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.201868 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-2758j\"" Apr 23 17:54:13.201990 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.201944 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.203626 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.203611 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-z5mb4\"" Apr 23 17:54:13.203710 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.203640 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 23 17:54:13.204187 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.204167 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.205175 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.205162 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 23 17:54:13.206381 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.206368 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.206635 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.206587 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 23 17:54:13.206821 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.206806 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:13.206921 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.206892 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 23 17:54:13.206981 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.206935 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-k7xsh\"" Apr 23 17:54:13.206981 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.206945 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-r6lhx\"" Apr 23 17:54:13.206981 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.206957 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:13.207141 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.207010 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 23 17:54:13.208718 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.208568 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.208718 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.208678 2574 scope.go:117] "RemoveContainer" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" Apr 23 17:54:13.208920 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.208897 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:54:13.209336 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.209319 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 23 17:54:13.210335 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.210320 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 23 17:54:13.211202 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.211182 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-x8mkm\"" Apr 23 17:54:13.211283 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.211220 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 23 17:54:13.211283 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.211228 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 23 17:54:13.211365 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.211295 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-btltb\"" Apr 23 17:54:13.238987 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.238973 2574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 17:54:13.272109 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272085 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-run-netns\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.272207 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272114 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-kubernetes\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.272207 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272136 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-var-lib-kubelet\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 
23 17:54:13.272207 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272160 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-slash\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.272207 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272195 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-cni-bin\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.272343 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272225 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-k8s-cni-cncf-io\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.272343 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272256 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-cni-bin\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.272343 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272277 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-systemd-units\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.272343 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272293 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-var-lib-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.272343 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272308 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-registration-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272343 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-system-cni-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272369 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272387 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-cni-netd\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272404 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-ovnkube-script-lib\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272424 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272438 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-device-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272453 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-etc-selinux\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272469 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/58265a7e-9515-43ed-8838-b59c7bc68f1a-serviceca\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272482 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-cni-binary-copy\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.272528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272523 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysconfig\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272550 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-run\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272589 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4f625df8-2016-4ff3-8cc7-d03314b05183-hosts-file\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272616 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgpz\" (UniqueName: \"kubernetes.io/projected/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-kube-api-access-8xgpz\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272643 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8wgg\" (UniqueName: \"kubernetes.io/projected/58265a7e-9515-43ed-8838-b59c7bc68f1a-kube-api-access-f8wgg\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272668 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-cni-multus\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272700 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-kubelet\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272726 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ppnk\" (UniqueName: \"kubernetes.io/projected/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-kube-api-access-7ppnk\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272752 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f9e76a-768c-4e49-8238-031ed17ddef2-tmp\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 
17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272776 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxftg\" (UniqueName: \"kubernetes.io/projected/934aa068-0f79-4196-9fc1-e81a90b22334-kube-api-access-pxftg\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272801 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-cni-binary-copy\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272825 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-modprobe-d\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272879 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-ovnkube-config\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272903 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-sys-fs\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272925 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4f625df8-2016-4ff3-8cc7-d03314b05183-tmp-dir\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272946 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-systemd\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.273043 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272970 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.272995 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ntzzc\" (UniqueName: \"kubernetes.io/projected/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-kube-api-access-ntzzc\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273033 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v479s\" (UniqueName: \"kubernetes.io/projected/bbe2b171-bf55-475a-a044-e38bab188f11-kube-api-access-v479s\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273055 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-cnibin\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273077 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-hostroot\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273102 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-daemon-config\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273126 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysctl-d\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273147 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-tuned\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273170 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273193 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58265a7e-9515-43ed-8838-b59c7bc68f1a-host\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " 
pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273215 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-socket-dir-parent\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273237 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-netns\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273260 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-multus-certs\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273288 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/78150dd7-ba24-49a4-841f-fe57e5708a0b-host-slash\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273316 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b4678728-6bf6-4a08-98fc-620935708987-agent-certs\") pod \"konnectivity-agent-v2msm\" (UID: \"b4678728-6bf6-4a08-98fc-620935708987\") " pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273343 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b4678728-6bf6-4a08-98fc-620935708987-konnectivity-ca\") pod \"konnectivity-agent-v2msm\" (UID: \"b4678728-6bf6-4a08-98fc-620935708987\") " pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273375 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-host\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.273729 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273406 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysctl-conf\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273428 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-sys\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273452 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-log-socket\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273474 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-system-cni-dir\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273510 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/78150dd7-ba24-49a4-841f-fe57e5708a0b-iptables-alerter-script\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273539 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273575 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-kubelet-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273594 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-cnibin\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273609 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-os-release\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273632 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273654 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-conf-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273675 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5td5\" (UniqueName: \"kubernetes.io/projected/93f9e76a-768c-4e49-8238-031ed17ddef2-kube-api-access-v5td5\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273690 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-etc-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273708 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-cni-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273730 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-os-release\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273745 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-systemd\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.274279 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273758 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-kubelet\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273771 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/934aa068-0f79-4196-9fc1-e81a90b22334-ovn-node-metrics-cert\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273785 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-lib-modules\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273807 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw2f8\" (UniqueName: \"kubernetes.io/projected/4f625df8-2016-4ff3-8cc7-d03314b05183-kube-api-access-mw2f8\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273821 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-ovn\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273834 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-node-log\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273866 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273906 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9btkl\" (UniqueName: \"kubernetes.io/projected/78150dd7-ba24-49a4-841f-fe57e5708a0b-kube-api-access-9btkl\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273931 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-env-overrides\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273947 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-socket-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 
17:54:13.273970 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.274716 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.273983 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-etc-kubernetes\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.280097 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.278689 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-135-87.ec2.internal" podStartSLOduration=1.278677509 podStartE2EDuration="1.278677509s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:54:13.278256859 +0000 UTC m=+123.642297801" watchObservedRunningTime="2026-04-23 17:54:13.278677509 +0000 UTC m=+123.642718451" Apr 23 17:54:13.375192 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375166 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-cni-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.375192 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375192 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-os-release\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375207 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-systemd\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375220 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-kubelet\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375235 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/934aa068-0f79-4196-9fc1-e81a90b22334-ovn-node-metrics-cert\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375289 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-kubelet\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375300 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-os-release\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375307 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-systemd\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375317 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-cni-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375332 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-lib-modules\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.375367 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375359 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mw2f8\" (UniqueName: \"kubernetes.io/projected/4f625df8-2016-4ff3-8cc7-d03314b05183-kube-api-access-mw2f8\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375402 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-ovn\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375421 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-node-log\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375443 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375467 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9btkl\" (UniqueName: \"kubernetes.io/projected/78150dd7-ba24-49a4-841f-fe57e5708a0b-kube-api-access-9btkl\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375470 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-ovn\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375485 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-env-overrides\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375487 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-lib-modules\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375500 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-socket-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375495 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-node-log\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375517 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375534 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375542 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-etc-kubernetes\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: 
I0423 17:54:13.375565 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-run-netns\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375588 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-kubernetes\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375611 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-var-lib-kubelet\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375657 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-slash\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376001 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375663 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-run-netns\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375679 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-cni-bin\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375673 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-kubernetes\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375700 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375712 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-etc-kubernetes\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.376747 
ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375746 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-cni-bin\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375770 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-var-lib-kubelet\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375769 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-socket-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375783 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-k8s-cni-cncf-io\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375809 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-slash\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375830 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-cni-bin\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375832 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-k8s-cni-cncf-io\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375880 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-systemd-units\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375901 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-cni-bin\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 
17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375905 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-var-lib-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375944 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-systemd-units\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375960 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-var-lib-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375973 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-registration-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.376747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.375991 2574 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376001 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-system-cni-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376028 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376036 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-system-cni-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376043 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-registration-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376004 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-env-overrides\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376067 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-cni-netd\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376098 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-ovnkube-script-lib\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376121 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376099 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-cni-netd\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376136 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-device-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376169 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-etc-selinux\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376173 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-device-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376198 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/58265a7e-9515-43ed-8838-b59c7bc68f1a-serviceca\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376222 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-cni-binary-copy\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376260 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-etc-selinux\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376270 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysconfig\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.377597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376299 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-run\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376309 2574 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysconfig\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376325 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4f625df8-2016-4ff3-8cc7-d03314b05183-hosts-file\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376352 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-run\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376353 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xgpz\" (UniqueName: \"kubernetes.io/projected/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-kube-api-access-8xgpz\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376388 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f8wgg\" (UniqueName: \"kubernetes.io/projected/58265a7e-9515-43ed-8838-b59c7bc68f1a-kube-api-access-f8wgg\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376414 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-cni-multus\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376427 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4f625df8-2016-4ff3-8cc7-d03314b05183-hosts-file\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376440 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-kubelet\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376464 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7ppnk\" (UniqueName: \"kubernetes.io/projected/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-kube-api-access-7ppnk\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 
17:54:13.376491 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f9e76a-768c-4e49-8238-031ed17ddef2-tmp\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376516 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pxftg\" (UniqueName: \"kubernetes.io/projected/934aa068-0f79-4196-9fc1-e81a90b22334-kube-api-access-pxftg\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376540 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-cni-binary-copy\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376564 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-modprobe-d\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376593 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-ovnkube-config\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376605 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-cni-multus\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376618 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-sys-fs\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376642 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4f625df8-2016-4ff3-8cc7-d03314b05183-tmp-dir\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.378254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376651 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-var-lib-kubelet\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 
17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376563 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376670 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-systemd\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376674 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-ovnkube-script-lib\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376667 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/58265a7e-9515-43ed-8838-b59c7bc68f1a-serviceca\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376711 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376737 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ntzzc\" (UniqueName: \"kubernetes.io/projected/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-kube-api-access-ntzzc\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376747 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-cni-binary-copy\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376764 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v479s\" (UniqueName: \"kubernetes.io/projected/bbe2b171-bf55-475a-a044-e38bab188f11-kube-api-access-v479s\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376780 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-modprobe-d\") pod \"tuned-8htbm\" (UID: 
\"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376790 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-cnibin\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376813 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-hostroot\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376836 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-daemon-config\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.376840 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376894 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysctl-d\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376918 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-tuned\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.376945 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs podName:43c90ba9-23a0-4be9-a89b-8ff980f1bb05 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:13.876924893 +0000 UTC m=+124.240965833 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs") pod "network-metrics-daemon-v8bcb" (UID: "43c90ba9-23a0-4be9-a89b-8ff980f1bb05") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:13.378800 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.376977 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377009 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58265a7e-9515-43ed-8838-b59c7bc68f1a-host\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377014 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4f625df8-2016-4ff3-8cc7-d03314b05183-tmp-dir\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377038 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-socket-dir-parent\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377042 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-hostroot\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377061 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-netns\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377085 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-multus-certs\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377106 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/78150dd7-ba24-49a4-841f-fe57e5708a0b-host-slash\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377109 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-cnibin\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377126 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b4678728-6bf6-4a08-98fc-620935708987-agent-certs\") pod \"konnectivity-agent-v2msm\" (UID: \"b4678728-6bf6-4a08-98fc-620935708987\") " pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377147 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b4678728-6bf6-4a08-98fc-620935708987-konnectivity-ca\") pod \"konnectivity-agent-v2msm\" (UID: \"b4678728-6bf6-4a08-98fc-620935708987\") " pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377166 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-host\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377187 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysctl-conf\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377205 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-sys\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377226 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-log-socket\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377230 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/934aa068-0f79-4196-9fc1-e81a90b22334-ovnkube-config\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377246 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-system-cni-dir\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377268 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/78150dd7-ba24-49a4-841f-fe57e5708a0b-iptables-alerter-script\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.379293 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377281 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-multus-certs\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377318 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysctl-d\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377327 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/78150dd7-ba24-49a4-841f-fe57e5708a0b-host-slash\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377366 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-sys\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377373 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-socket-dir-parent\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377410 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-host-run-netns\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377062 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-sys-fs\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377291 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377478 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/58265a7e-9515-43ed-8838-b59c7bc68f1a-host\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377479 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-kubelet-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377523 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-daemon-config\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377530 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-kubelet-dir\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377526 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-cnibin\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377554 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-cnibin\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377578 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-os-release\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377590 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-host\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377593 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-system-cni-dir\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.379779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377602 2574 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377626 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-conf-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377634 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-log-socket\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377647 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5td5\" (UniqueName: \"kubernetes.io/projected/93f9e76a-768c-4e49-8238-031ed17ddef2-kube-api-access-v5td5\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377668 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-etc-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377682 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-systemd\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377689 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-sysctl-conf\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377732 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-etc-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377734 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-run-openvswitch\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 
17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377268 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-cni-binary-copy\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377769 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-multus-conf-dir\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377790 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bbe2b171-bf55-475a-a044-e38bab188f11-os-release\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377816 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934aa068-0f79-4196-9fc1-e81a90b22334-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.377836 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b4678728-6bf6-4a08-98fc-620935708987-konnectivity-ca\") pod \"konnectivity-agent-v2msm\" (UID: \"b4678728-6bf6-4a08-98fc-620935708987\") " pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.378129 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/78150dd7-ba24-49a4-841f-fe57e5708a0b-iptables-alerter-script\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.380272 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.378190 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/bbe2b171-bf55-475a-a044-e38bab188f11-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.380931 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.380628 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f9e76a-768c-4e49-8238-031ed17ddef2-tmp\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.380931 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.380646 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/93f9e76a-768c-4e49-8238-031ed17ddef2-etc-tuned\") pod \"tuned-8htbm\" (UID: 
\"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.380931 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.380689 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/934aa068-0f79-4196-9fc1-e81a90b22334-ovn-node-metrics-cert\") pod \"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.380931 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.380872 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b4678728-6bf6-4a08-98fc-620935708987-agent-certs\") pod \"konnectivity-agent-v2msm\" (UID: \"b4678728-6bf6-4a08-98fc-620935708987\") " pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:13.384958 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.384933 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:13.385061 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.384961 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:13.385061 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.384974 2574 projected.go:194] Error preparing data for projected volume kube-api-access-tr8gz for pod openshift-network-diagnostics/network-check-target-vfxjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:13.385061 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.385031 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz podName:194b68f6-135d-472e-a449-ddda482b9755 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:13.885013376 +0000 UTC m=+124.249054308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tr8gz" (UniqueName: "kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz") pod "network-check-target-vfxjl" (UID: "194b68f6-135d-472e-a449-ddda482b9755") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:13.385210 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.385194 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9btkl\" (UniqueName: \"kubernetes.io/projected/78150dd7-ba24-49a4-841f-fe57e5708a0b-kube-api-access-9btkl\") pod \"iptables-alerter-jrchc\" (UID: \"78150dd7-ba24-49a4-841f-fe57e5708a0b\") " pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.386254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.386234 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw2f8\" (UniqueName: \"kubernetes.io/projected/4f625df8-2016-4ff3-8cc7-d03314b05183-kube-api-access-mw2f8\") pod \"node-resolver-z72z9\" (UID: \"4f625df8-2016-4ff3-8cc7-d03314b05183\") " pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.389727 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.389703 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ppnk\" (UniqueName: \"kubernetes.io/projected/339ba7f9-7ad9-40ca-b311-6f109fbcfc6a-kube-api-access-7ppnk\") pod \"multus-hhc5p\" (UID: \"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a\") " pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.390232 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.390215 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8wgg\" (UniqueName: \"kubernetes.io/projected/58265a7e-9515-43ed-8838-b59c7bc68f1a-kube-api-access-f8wgg\") pod \"node-ca-4767s\" (UID: \"58265a7e-9515-43ed-8838-b59c7bc68f1a\") " pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.394746 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.394716 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v479s\" (UniqueName: \"kubernetes.io/projected/bbe2b171-bf55-475a-a044-e38bab188f11-kube-api-access-v479s\") pod \"multus-additional-cni-plugins-rn6ls\" (UID: \"bbe2b171-bf55-475a-a044-e38bab188f11\") " pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.394828 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.394815 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5td5\" (UniqueName: \"kubernetes.io/projected/93f9e76a-768c-4e49-8238-031ed17ddef2-kube-api-access-v5td5\") pod \"tuned-8htbm\" (UID: \"93f9e76a-768c-4e49-8238-031ed17ddef2\") " pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.394906 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.394882 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntzzc\" (UniqueName: \"kubernetes.io/projected/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-kube-api-access-ntzzc\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:13.395322 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.395307 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxftg\" (UniqueName: \"kubernetes.io/projected/934aa068-0f79-4196-9fc1-e81a90b22334-kube-api-access-pxftg\") pod 
\"ovnkube-node-hhdnl\" (UID: \"934aa068-0f79-4196-9fc1-e81a90b22334\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.395550 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.395530 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgpz\" (UniqueName: \"kubernetes.io/projected/4f4d7e96-7d49-43ba-bd2c-ee439980c9ed-kube-api-access-8xgpz\") pod \"aws-ebs-csi-driver-node-zb5cp\" (UID: \"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.495178 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.495162 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:13.499751 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.499734 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4767s" Apr 23 17:54:13.501283 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.501262 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod934aa068_0f79_4196_9fc1_e81a90b22334.slice/crio-eb7d15d7beba15e3e1fcf987c1820cc136b530b87ec8a4cad0ed8c9f9bdea389 WatchSource:0}: Error finding container eb7d15d7beba15e3e1fcf987c1820cc136b530b87ec8a4cad0ed8c9f9bdea389: Status 404 returned error can't find the container with id eb7d15d7beba15e3e1fcf987c1820cc136b530b87ec8a4cad0ed8c9f9bdea389 Apr 23 17:54:13.505096 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.505073 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hhc5p" Apr 23 17:54:13.505319 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.505296 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58265a7e_9515_43ed_8838_b59c7bc68f1a.slice/crio-c33336706fe6d4836cd1ed949b698ec9bc5660caa7c68d868136ed240fc15fed WatchSource:0}: Error finding container c33336706fe6d4836cd1ed949b698ec9bc5660caa7c68d868136ed240fc15fed: Status 404 returned error can't find the container with id c33336706fe6d4836cd1ed949b698ec9bc5660caa7c68d868136ed240fc15fed Apr 23 17:54:13.510149 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.510129 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod339ba7f9_7ad9_40ca_b311_6f109fbcfc6a.slice/crio-3004b13ef200dccbc99a2353fbe67230ced969af4ae9a97bb0038cdc9b3ee481 WatchSource:0}: Error finding container 3004b13ef200dccbc99a2353fbe67230ced969af4ae9a97bb0038cdc9b3ee481: Status 404 returned error can't find the container with id 3004b13ef200dccbc99a2353fbe67230ced969af4ae9a97bb0038cdc9b3ee481 Apr 23 17:54:13.512686 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.512665 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-jrchc" Apr 23 17:54:13.517401 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.517382 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:13.519662 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.519639 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78150dd7_ba24_49a4_841f_fe57e5708a0b.slice/crio-02bc45039005d52d5c0eec735beeec5b31262a9b0b144500434821e8a7f50214 WatchSource:0}: Error finding container 02bc45039005d52d5c0eec735beeec5b31262a9b0b144500434821e8a7f50214: Status 404 returned error can't find the container with id 02bc45039005d52d5c0eec735beeec5b31262a9b0b144500434821e8a7f50214 Apr 23 17:54:13.522805 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.522791 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" Apr 23 17:54:13.524144 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.524122 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4678728_6bf6_4a08_98fc_620935708987.slice/crio-97418bb3505f23d73c093c34e3b9ad64c8e825f647caf7e4b245da8639d0e519 WatchSource:0}: Error finding container 97418bb3505f23d73c093c34e3b9ad64c8e825f647caf7e4b245da8639d0e519: Status 404 returned error can't find the container with id 97418bb3505f23d73c093c34e3b9ad64c8e825f647caf7e4b245da8639d0e519 Apr 23 17:54:13.528075 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.528053 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-8htbm" Apr 23 17:54:13.528235 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.528216 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4d7e96_7d49_43ba_bd2c_ee439980c9ed.slice/crio-22997a8f00cb4ae0da70be02a48d90aa26439da917e6e03fae6061a457497f1b WatchSource:0}: Error finding container 22997a8f00cb4ae0da70be02a48d90aa26439da917e6e03fae6061a457497f1b: Status 404 returned error can't find the container with id 22997a8f00cb4ae0da70be02a48d90aa26439da917e6e03fae6061a457497f1b Apr 23 17:54:13.533271 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.533241 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-z72z9" Apr 23 17:54:13.533733 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.533714 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93f9e76a_768c_4e49_8238_031ed17ddef2.slice/crio-f701cdac38e49ec0800e5213bca1809a6e6bc07b8eaddb524fb5bc606b197c55 WatchSource:0}: Error finding container f701cdac38e49ec0800e5213bca1809a6e6bc07b8eaddb524fb5bc606b197c55: Status 404 returned error can't find the container with id f701cdac38e49ec0800e5213bca1809a6e6bc07b8eaddb524fb5bc606b197c55 Apr 23 17:54:13.538071 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.536934 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" Apr 23 17:54:13.543271 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.543253 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f625df8_2016_4ff3_8cc7_d03314b05183.slice/crio-849cbc012b27c838cd828075e35b75e75f6d50b007b634830b0d1f756fe22b7f WatchSource:0}: Error finding container 849cbc012b27c838cd828075e35b75e75f6d50b007b634830b0d1f756fe22b7f: Status 404 returned error can't find the container with id 849cbc012b27c838cd828075e35b75e75f6d50b007b634830b0d1f756fe22b7f Apr 23 17:54:13.545977 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:13.545956 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbe2b171_bf55_475a_a044_e38bab188f11.slice/crio-54d4c08c83c45680e314df9c93c2d2e2b7285ce71bf3effad991657c91ffd007 WatchSource:0}: Error finding container 54d4c08c83c45680e314df9c93c2d2e2b7285ce71bf3effad991657c91ffd007: Status 404 returned error can't find the container with id 54d4c08c83c45680e314df9c93c2d2e2b7285ce71bf3effad991657c91ffd007 Apr 23 17:54:13.881485 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.881408 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:13.881620 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.881507 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:13.881620 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.881575 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs podName:43c90ba9-23a0-4be9-a89b-8ff980f1bb05 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:14.881558993 +0000 UTC m=+125.245599912 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs") pod "network-metrics-daemon-v8bcb" (UID: "43c90ba9-23a0-4be9-a89b-8ff980f1bb05") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:13.981839 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:13.981788 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:13.982036 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.981977 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:13.982036 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.981997 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:13.982036 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.982009 2574 projected.go:194] Error preparing data for projected volume kube-api-access-tr8gz for pod openshift-network-diagnostics/network-check-target-vfxjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:13.982206 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:13.982068 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz podName:194b68f6-135d-472e-a449-ddda482b9755 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:14.98204773 +0000 UTC m=+125.346088652 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tr8gz" (UniqueName: "kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz") pod "network-check-target-vfxjl" (UID: "194b68f6-135d-472e-a449-ddda482b9755") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:14.470325 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.470286 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z72z9" event={"ID":"4f625df8-2016-4ff3-8cc7-d03314b05183","Type":"ContainerStarted","Data":"849cbc012b27c838cd828075e35b75e75f6d50b007b634830b0d1f756fe22b7f"} Apr 23 17:54:14.476024 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.475990 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-8htbm" event={"ID":"93f9e76a-768c-4e49-8238-031ed17ddef2","Type":"ContainerStarted","Data":"f701cdac38e49ec0800e5213bca1809a6e6bc07b8eaddb524fb5bc606b197c55"} Apr 23 17:54:14.486083 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.486054 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" event={"ID":"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed","Type":"ContainerStarted","Data":"22997a8f00cb4ae0da70be02a48d90aa26439da917e6e03fae6061a457497f1b"} Apr 23 17:54:14.489143 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.489115 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-v2msm" event={"ID":"b4678728-6bf6-4a08-98fc-620935708987","Type":"ContainerStarted","Data":"97418bb3505f23d73c093c34e3b9ad64c8e825f647caf7e4b245da8639d0e519"} Apr 23 17:54:14.501274 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.501228 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hhc5p" event={"ID":"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a","Type":"ContainerStarted","Data":"3004b13ef200dccbc99a2353fbe67230ced969af4ae9a97bb0038cdc9b3ee481"} Apr 23 17:54:14.506454 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.506429 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4767s" event={"ID":"58265a7e-9515-43ed-8838-b59c7bc68f1a","Type":"ContainerStarted","Data":"c33336706fe6d4836cd1ed949b698ec9bc5660caa7c68d868136ed240fc15fed"} Apr 23 17:54:14.523252 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.523223 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"eb7d15d7beba15e3e1fcf987c1820cc136b530b87ec8a4cad0ed8c9f9bdea389"} Apr 23 17:54:14.532028 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.532000 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" event={"ID":"bbe2b171-bf55-475a-a044-e38bab188f11","Type":"ContainerStarted","Data":"54d4c08c83c45680e314df9c93c2d2e2b7285ce71bf3effad991657c91ffd007"} Apr 23 17:54:14.537955 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.537929 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-jrchc" event={"ID":"78150dd7-ba24-49a4-841f-fe57e5708a0b","Type":"ContainerStarted","Data":"02bc45039005d52d5c0eec735beeec5b31262a9b0b144500434821e8a7f50214"} Apr 23 17:54:14.888569 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.888483 2574 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:14.888723 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:14.888633 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:14.888723 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:14.888700 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs podName:43c90ba9-23a0-4be9-a89b-8ff980f1bb05 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:16.888680883 +0000 UTC m=+127.252721813 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs") pod "network-metrics-daemon-v8bcb" (UID: "43c90ba9-23a0-4be9-a89b-8ff980f1bb05") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:14.993957 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:14.989689 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:14.993957 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:14.989862 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:14.993957 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:14.989884 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:14.993957 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:14.989898 2574 projected.go:194] Error preparing data for projected volume kube-api-access-tr8gz for pod openshift-network-diagnostics/network-check-target-vfxjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:14.993957 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:14.989957 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz podName:194b68f6-135d-472e-a449-ddda482b9755 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:16.989937668 +0000 UTC m=+127.353978592 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tr8gz" (UniqueName: "kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz") pod "network-check-target-vfxjl" (UID: "194b68f6-135d-472e-a449-ddda482b9755") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:15.223380 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:15.223336 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:15.269235 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:15.269196 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:15.269387 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:15.269359 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:15.269906 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:15.269876 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:15.270005 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:15.269975 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:16.904692 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:16.904129 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:16.904692 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:16.904290 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:16.904692 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:16.904356 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs podName:43c90ba9-23a0-4be9-a89b-8ff980f1bb05 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:20.90433668 +0000 UTC m=+131.268377604 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs") pod "network-metrics-daemon-v8bcb" (UID: "43c90ba9-23a0-4be9-a89b-8ff980f1bb05") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:17.004768 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:17.004712 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:17.005460 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:17.005007 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:17.005460 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:17.005029 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:17.005460 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:17.005042 2574 projected.go:194] Error preparing data for projected volume kube-api-access-tr8gz for pod openshift-network-diagnostics/network-check-target-vfxjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:17.005460 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:17.005098 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz podName:194b68f6-135d-472e-a449-ddda482b9755 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:21.005081293 +0000 UTC m=+131.369122217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tr8gz" (UniqueName: "kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz") pod "network-check-target-vfxjl" (UID: "194b68f6-135d-472e-a449-ddda482b9755") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:17.268395 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:17.268254 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:17.268580 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:17.268401 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:17.268964 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:17.268758 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:17.268964 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:17.268884 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:19.268560 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:19.268522 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:19.268933 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:19.268664 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:19.268933 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:19.268736 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:19.268933 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:19.268823 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:20.224624 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:20.224573 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:20.937563 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:20.937053 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:20.937563 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:20.937225 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:20.937563 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:20.937278 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs podName:43c90ba9-23a0-4be9-a89b-8ff980f1bb05 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:28.937264243 +0000 UTC m=+139.301305162 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs") pod "network-metrics-daemon-v8bcb" (UID: "43c90ba9-23a0-4be9-a89b-8ff980f1bb05") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:21.038877 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:21.038259 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:21.038877 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:21.038426 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:21.038877 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:21.038447 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:21.038877 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:21.038461 2574 projected.go:194] Error preparing data for projected volume kube-api-access-tr8gz for pod openshift-network-diagnostics/network-check-target-vfxjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:21.038877 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:21.038524 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz podName:194b68f6-135d-472e-a449-ddda482b9755 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:29.038504931 +0000 UTC m=+139.402545853 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tr8gz" (UniqueName: "kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz") pod "network-check-target-vfxjl" (UID: "194b68f6-135d-472e-a449-ddda482b9755") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:21.269155 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:21.268611 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:21.269155 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:21.268719 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:21.269155 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:21.268784 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:21.269155 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:21.268872 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:23.268812 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:23.268773 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:23.269264 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:23.268790 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:23.269264 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:23.268919 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:23.269264 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:23.268999 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:25.225712 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:25.225679 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:25.268995 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:25.268954 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:25.269147 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:25.268966 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:25.269147 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:25.269079 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:25.269249 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:25.269183 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:27.268600 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:27.268563 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:27.268600 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:27.268563 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:27.269117 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:27.268926 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:27.269117 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:27.268934 2574 scope.go:117] "RemoveContainer" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" Apr 23 17:54:27.269117 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:27.269011 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:27.269264 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:27.269171 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:54:29.001347 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:29.001308 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:29.001802 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:29.001463 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:29.001802 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:29.001534 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs podName:43c90ba9-23a0-4be9-a89b-8ff980f1bb05 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:45.001513474 +0000 UTC m=+155.365554411 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs") pod "network-metrics-daemon-v8bcb" (UID: "43c90ba9-23a0-4be9-a89b-8ff980f1bb05") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:29.102104 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:29.102058 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:29.102272 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:29.102245 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:29.102272 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:29.102271 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:29.102360 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:29.102286 2574 projected.go:194] Error preparing data for projected volume kube-api-access-tr8gz for pod openshift-network-diagnostics/network-check-target-vfxjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:29.102360 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:29.102353 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz podName:194b68f6-135d-472e-a449-ddda482b9755 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:45.102332739 +0000 UTC m=+155.466373659 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tr8gz" (UniqueName: "kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz") pod "network-check-target-vfxjl" (UID: "194b68f6-135d-472e-a449-ddda482b9755") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:29.268294 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:29.268213 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:29.268470 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:29.268362 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:29.268470 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:29.268410 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:29.268596 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:29.268507 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:30.230029 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:30.229994 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:30.572700 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.572558 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"40520414d312ea041e88f5292afdbf9d1c03cb0496ed0c2fdef96553a3d96d20"} Apr 23 17:54:30.573836 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.573811 2574 generic.go:358] "Generic (PLEG): container finished" podID="bbe2b171-bf55-475a-a044-e38bab188f11" containerID="d58438e70618d6ac6b509b0fab481b46228e30fe311b316635d03e80a4f4ef85" exitCode=0 Apr 23 17:54:30.573963 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.573883 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" event={"ID":"bbe2b171-bf55-475a-a044-e38bab188f11","Type":"ContainerDied","Data":"d58438e70618d6ac6b509b0fab481b46228e30fe311b316635d03e80a4f4ef85"} Apr 23 17:54:30.575207 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.575183 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z72z9" event={"ID":"4f625df8-2016-4ff3-8cc7-d03314b05183","Type":"ContainerStarted","Data":"58170352a20feb9c0f46abad29708309b0a4a17535e7b32c5adcd9b00905ca61"} Apr 23 17:54:30.579165 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.576897 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-8htbm" event={"ID":"93f9e76a-768c-4e49-8238-031ed17ddef2","Type":"ContainerStarted","Data":"a00666107afc517287ea7dcbf8cb58dc3156ab80739b09e45c5cb077ed3e62f2"} Apr 23 17:54:30.583125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.583094 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" event={"ID":"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed","Type":"ContainerStarted","Data":"8ecd07e11629dfed25287ae67c6c93934e56033d64d48d357193216630d2a56d"} Apr 23 17:54:30.585013 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.584977 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-v2msm" event={"ID":"b4678728-6bf6-4a08-98fc-620935708987","Type":"ContainerStarted","Data":"1ec33eef6b9fc671ed83869c13919e14791c006eabf7c7e118fb39475a11fe7e"} Apr 23 17:54:30.586303 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.586273 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hhc5p" event={"ID":"339ba7f9-7ad9-40ca-b311-6f109fbcfc6a","Type":"ContainerStarted","Data":"11b5fe7555e845c377948fe1192fd7337895111b0d39fac62c278a0b8fa0c741"} Apr 23 17:54:30.587638 
ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.587620 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4767s" event={"ID":"58265a7e-9515-43ed-8838-b59c7bc68f1a","Type":"ContainerStarted","Data":"8341bc1ad4be26f27a008bfaa0409d858c756f9a2c905d13ed3e7fe49a3be09e"} Apr 23 17:54:30.622721 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.622676 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4767s" podStartSLOduration=5.937743527 podStartE2EDuration="22.622664861s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.507563701 +0000 UTC m=+123.871604620" lastFinishedPulling="2026-04-23 17:54:30.192485032 +0000 UTC m=+140.556525954" observedRunningTime="2026-04-23 17:54:30.622333447 +0000 UTC m=+140.986374388" watchObservedRunningTime="2026-04-23 17:54:30.622664861 +0000 UTC m=+140.986705844" Apr 23 17:54:30.637821 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.637785 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hhc5p" podStartSLOduration=5.849763823 podStartE2EDuration="22.637773556s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.511388314 +0000 UTC m=+123.875429232" lastFinishedPulling="2026-04-23 17:54:30.299398029 +0000 UTC m=+140.663438965" observedRunningTime="2026-04-23 17:54:30.63738142 +0000 UTC m=+141.001422362" watchObservedRunningTime="2026-04-23 17:54:30.637773556 +0000 UTC m=+141.001814520" Apr 23 17:54:30.652740 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.652698 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-z72z9" podStartSLOduration=6.004802119 podStartE2EDuration="22.652686789s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.544554658 +0000 UTC m=+123.908595581" lastFinishedPulling="2026-04-23 17:54:30.192439318 +0000 UTC m=+140.556480251" observedRunningTime="2026-04-23 17:54:30.652204263 +0000 UTC m=+141.016245208" watchObservedRunningTime="2026-04-23 17:54:30.652686789 +0000 UTC m=+141.016727731" Apr 23 17:54:30.668898 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:30.668837 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-v2msm" podStartSLOduration=10.305508265 podStartE2EDuration="22.668825299s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.525697294 +0000 UTC m=+123.889738217" lastFinishedPulling="2026-04-23 17:54:25.88901433 +0000 UTC m=+136.253055251" observedRunningTime="2026-04-23 17:54:30.668255781 +0000 UTC m=+141.032296723" watchObservedRunningTime="2026-04-23 17:54:30.668825299 +0000 UTC m=+141.032866239" Apr 23 17:54:31.268450 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.268425 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:31.269106 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.268434 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:31.269106 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:31.268544 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:31.269106 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:31.268663 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:31.310532 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.310495 2574 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 23 17:54:31.591177 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.591142 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" event={"ID":"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed","Type":"ContainerStarted","Data":"8869d90f6abc47e07b90257b50cc39158164ed100356ea839fd2226183ac125d"} Apr 23 17:54:31.594108 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.594066 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"74c8997f4e61393d29efce72239e61ec3d08dd8ba277e376c1d7fc4865ff1969"} Apr 23 17:54:31.594108 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.594100 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"9ee18c5b3fa89169cc5a480967b0888312bb7ce415104497d7fcd82f76b1a006"} Apr 23 17:54:31.594266 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.594114 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"2807a2ab1ceb372829b027f712d6de75e87faa618878aea9dfd40c38c06e50de"} Apr 23 17:54:31.594266 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.594125 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"5203f6376cc48a6fa34553c8a7718232284aa9de79d4a787c78dca5244d96ba0"} Apr 23 17:54:31.594266 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:31.594135 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"33ed14771cb99fda2f3bd4c69e0c057b7a539bb29c442e8c760fc11dc4ab14cf"} Apr 23 17:54:32.284825 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:32.284717 2574 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-23T17:54:31.310515342Z","UUID":"93ebe674-b705-42d3-a273-e5ef0e6983c4","Handler":null,"Name":"","Endpoint":""} Apr 23 17:54:32.288151 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:32.288128 2574 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 23 17:54:32.288258 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:32.288159 2574 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 23 17:54:32.598055 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:32.597959 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-jrchc" event={"ID":"78150dd7-ba24-49a4-841f-fe57e5708a0b","Type":"ContainerStarted","Data":"b7b475c5963b0f86f45571a1363081100d9ea02a22cbcf22568326a5f61fdf30"} Apr 23 17:54:32.601089 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:32.601063 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" event={"ID":"4f4d7e96-7d49-43ba-bd2c-ee439980c9ed","Type":"ContainerStarted","Data":"494175cf7579423a2964370cae7308b2ee802902ccbc46c8e8a1a23d0fa10e43"} Apr 23 17:54:32.612763 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:32.612719 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-jrchc" podStartSLOduration=7.928678853 podStartE2EDuration="24.612708012s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.521226982 +0000 UTC m=+123.885267901" lastFinishedPulling="2026-04-23 17:54:30.205256135 +0000 UTC m=+140.569297060" observedRunningTime="2026-04-23 17:54:32.612567393 +0000 UTC m=+142.976608334" watchObservedRunningTime="2026-04-23 17:54:32.612708012 +0000 UTC m=+142.976748953" Apr 23 17:54:32.613093 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:32.613065 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-8htbm" podStartSLOduration=7.948886795 podStartE2EDuration="24.613058713s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.537404094 +0000 UTC m=+123.901445028" lastFinishedPulling="2026-04-23 17:54:30.201576025 +0000 UTC m=+140.565616946" observedRunningTime="2026-04-23 17:54:30.68858337 +0000 UTC m=+141.052624310" watchObservedRunningTime="2026-04-23 17:54:32.613058713 +0000 UTC m=+142.977099661" Apr 23 17:54:32.630393 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:32.630345 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zb5cp" podStartSLOduration=5.954652468 podStartE2EDuration="24.630329921s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.530905282 +0000 UTC m=+123.894946200" lastFinishedPulling="2026-04-23 17:54:32.206582731 +0000 UTC m=+142.570623653" observedRunningTime="2026-04-23 17:54:32.629874494 +0000 UTC m=+142.993915435" watchObservedRunningTime="2026-04-23 17:54:32.630329921 +0000 UTC m=+142.994370864" Apr 23 17:54:33.268734 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:33.268647 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:33.268935 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:33.268772 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:33.268935 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:33.268826 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:33.269055 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:33.268952 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:33.518916 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:33.518608 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:33.519329 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:33.519318 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:33.603740 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:33.603679 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:33.604039 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:33.604023 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-v2msm" Apr 23 17:54:34.608191 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:34.608160 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"5b90fb56ba95fd750efd55cb99d339cb67b0c270c1eeb059624194eba7b11355"} Apr 23 17:54:35.231214 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:35.231183 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:35.268704 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:35.268681 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:35.268793 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:35.268686 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:35.268793 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:35.268777 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:35.268904 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:35.268876 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:35.611389 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:35.611308 2574 generic.go:358] "Generic (PLEG): container finished" podID="bbe2b171-bf55-475a-a044-e38bab188f11" containerID="be56228fe52da3eb3e623973d7a82c343eee2679278961e7f1c5b306fa326a9c" exitCode=0 Apr 23 17:54:35.612019 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:35.611405 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" event={"ID":"bbe2b171-bf55-475a-a044-e38bab188f11","Type":"ContainerDied","Data":"be56228fe52da3eb3e623973d7a82c343eee2679278961e7f1c5b306fa326a9c"} Apr 23 17:54:37.268394 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:37.268208 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:37.268955 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:37.268223 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:37.268955 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:37.268493 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:37.268955 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:37.268648 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:37.617885 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:37.617803 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" event={"ID":"934aa068-0f79-4196-9fc1-e81a90b22334","Type":"ContainerStarted","Data":"051bfc8a13ed246a61e4f717494128a748ceca0b0388b1269d31bb502d2b7527"} Apr 23 17:54:37.618161 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:37.618132 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:37.619683 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:37.619662 2574 generic.go:358] "Generic (PLEG): container finished" podID="bbe2b171-bf55-475a-a044-e38bab188f11" containerID="f797688ddfa2d2ab5f8120558520620dd9b766e6bc5073cbd167cf0894818579" exitCode=0 Apr 23 17:54:37.619765 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:37.619689 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" event={"ID":"bbe2b171-bf55-475a-a044-e38bab188f11","Type":"ContainerDied","Data":"f797688ddfa2d2ab5f8120558520620dd9b766e6bc5073cbd167cf0894818579"} Apr 23 17:54:37.634254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:37.634231 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:37.641705 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:37.641668 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" podStartSLOduration=12.779568429 podStartE2EDuration="29.641657943s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.502515349 +0000 UTC m=+123.866556268" lastFinishedPulling="2026-04-23 17:54:30.364604862 +0000 UTC m=+140.728645782" observedRunningTime="2026-04-23 17:54:37.641205161 +0000 UTC m=+148.005246101" watchObservedRunningTime="2026-04-23 17:54:37.641657943 +0000 UTC m=+148.005698884" Apr 23 17:54:38.623117 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:38.623057 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:38.623117 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:38.623096 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:38.642102 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:38.641882 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:54:38.877166 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:38.874871 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v8bcb"] Apr 23 17:54:38.877166 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:38.875029 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:38.877166 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:38.875178 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:38.877166 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:38.876532 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-vfxjl"] Apr 23 17:54:38.877166 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:38.876732 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:38.877491 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:38.877205 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:39.626294 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:39.626257 2574 generic.go:358] "Generic (PLEG): container finished" podID="bbe2b171-bf55-475a-a044-e38bab188f11" containerID="0fffce18889a51e867b821415db9bcab344419d11bbcac0d485c491fa5cf3142" exitCode=0 Apr 23 17:54:39.626681 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:39.626333 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" event={"ID":"bbe2b171-bf55-475a-a044-e38bab188f11","Type":"ContainerDied","Data":"0fffce18889a51e867b821415db9bcab344419d11bbcac0d485c491fa5cf3142"} Apr 23 17:54:40.231863 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:40.231817 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:40.269451 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:40.269413 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:40.269574 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:40.269526 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:40.269631 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:40.269566 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:40.269631 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:40.269600 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:40.269942 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:40.269914 2574 scope.go:117] "RemoveContainer" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" Apr 23 17:54:40.270108 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:40.270080 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:54:41.964468 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:41.964290 2574 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:54:42.268885 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:42.268710 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:42.268885 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:42.268763 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:42.269140 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:42.268889 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:42.269140 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:42.269016 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:44.268536 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:44.268498 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:44.269066 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:44.268619 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8bcb" podUID="43c90ba9-23a0-4be9-a89b-8ff980f1bb05" Apr 23 17:54:44.269066 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:44.268677 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:44.269066 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:44.268784 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vfxjl" podUID="194b68f6-135d-472e-a449-ddda482b9755" Apr 23 17:54:45.004406 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:45.004348 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:45.004587 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:45.004482 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:45.004587 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:45.004555 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs podName:43c90ba9-23a0-4be9-a89b-8ff980f1bb05 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:17.004539216 +0000 UTC m=+187.368580138 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs") pod "network-metrics-daemon-v8bcb" (UID: "43c90ba9-23a0-4be9-a89b-8ff980f1bb05") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:45.105289 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:45.105253 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:45.105447 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:45.105417 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:45.105447 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:45.105439 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:45.105447 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:45.105448 2574 projected.go:194] Error preparing data for projected volume kube-api-access-tr8gz for pod openshift-network-diagnostics/network-check-target-vfxjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:45.105551 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:45.105500 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz podName:194b68f6-135d-472e-a449-ddda482b9755 nodeName:}" failed. 
No retries permitted until 2026-04-23 17:55:17.105486252 +0000 UTC m=+187.469527172 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tr8gz" (UniqueName: "kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz") pod "network-check-target-vfxjl" (UID: "194b68f6-135d-472e-a449-ddda482b9755") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:46.268145 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.268110 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:54:46.268658 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.268162 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:54:46.272019 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.271990 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 23 17:54:46.272264 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.272028 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 23 17:54:46.272264 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.272065 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-5487j\"" Apr 23 17:54:46.272264 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.272083 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-gszvz\"" Apr 23 17:54:46.272264 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.272095 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 23 17:54:46.641331 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.641257 2574 generic.go:358] "Generic (PLEG): container finished" podID="bbe2b171-bf55-475a-a044-e38bab188f11" containerID="dcfcc1abcb3a72637b85e979d13ef77c81f34ef2e79a59004afb5afa02be72c9" exitCode=0 Apr 23 17:54:46.641331 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:46.641296 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" event={"ID":"bbe2b171-bf55-475a-a044-e38bab188f11","Type":"ContainerDied","Data":"dcfcc1abcb3a72637b85e979d13ef77c81f34ef2e79a59004afb5afa02be72c9"} Apr 23 17:54:47.645558 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:47.645515 2574 generic.go:358] "Generic (PLEG): container finished" podID="bbe2b171-bf55-475a-a044-e38bab188f11" containerID="fe2dbb83b555237aa378e4fbf335ae4ac48b6b16ade656c74c71bcc5a7ae3b67" exitCode=0 Apr 23 17:54:47.645994 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:47.645586 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" event={"ID":"bbe2b171-bf55-475a-a044-e38bab188f11","Type":"ContainerDied","Data":"fe2dbb83b555237aa378e4fbf335ae4ac48b6b16ade656c74c71bcc5a7ae3b67"} Apr 23 17:54:48.650062 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:48.650032 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" 
event={"ID":"bbe2b171-bf55-475a-a044-e38bab188f11","Type":"ContainerStarted","Data":"18efe84ee2665c47ab5dae270c77078b7e184f48ae8e02ca33c0d75b14bea559"} Apr 23 17:54:48.671415 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:48.671375 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rn6ls" podStartSLOduration=8.648876151 podStartE2EDuration="40.671361227s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:13.547226367 +0000 UTC m=+123.911267286" lastFinishedPulling="2026-04-23 17:54:45.569711443 +0000 UTC m=+155.933752362" observedRunningTime="2026-04-23 17:54:48.669582446 +0000 UTC m=+159.033623387" watchObservedRunningTime="2026-04-23 17:54:48.671361227 +0000 UTC m=+159.035402167" Apr 23 17:54:49.153987 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.153952 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-135-87.ec2.internal" event="NodeReady" Apr 23 17:54:49.190074 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.190041 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-phhz6"] Apr 23 17:54:49.218202 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.218180 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-585dfdc468-7h7vz"] Apr 23 17:54:49.218349 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.218332 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.221028 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.221004 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.221146 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.221004 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.221283 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.221267 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Apr 23 17:54:49.221545 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.221523 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-ffgdh\"" Apr 23 17:54:49.221545 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.221536 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Apr 23 17:54:49.232653 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.232629 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Apr 23 17:54:49.236285 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.236267 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-85cf97bcfb-crk2g"] Apr 23 17:54:49.236436 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.236421 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.239118 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.238919 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"openshift-insights-serving-cert\"" Apr 23 17:54:49.239118 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.238963 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.239118 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.238964 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"operator-dockercfg-d9fsv\"" Apr 23 17:54:49.239118 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.239041 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.239118 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.238967 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"service-ca-bundle\"" Apr 23 17:54:49.244737 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.244716 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"trusted-ca-bundle\"" Apr 23 17:54:49.251339 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.251322 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk"] Apr 23 17:54:49.251468 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.251452 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.253743 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.253726 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Apr 23 17:54:49.253880 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.253819 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"default-ingress-cert\"" Apr 23 17:54:49.253959 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.253901 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.254094 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.254077 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-pgfz6\"" Apr 23 17:54:49.254189 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.254119 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Apr 23 17:54:49.254493 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.254476 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Apr 23 17:54:49.254555 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.254485 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.279224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.279199 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5bb94bc895-f4jk5"] Apr 23 17:54:49.279345 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.279331 2574 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:49.281914 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.281895 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Apr 23 17:54:49.282024 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.281981 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.282081 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.282029 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.282130 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.282111 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-l2jds\"" Apr 23 17:54:49.292085 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.292067 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z"] Apr 23 17:54:49.292200 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.292184 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.294676 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.294654 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 23 17:54:49.294676 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.294670 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-psc88\"" Apr 23 17:54:49.294793 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.294659 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 23 17:54:49.295125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.295112 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 23 17:54:49.300436 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.300421 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 23 17:54:49.304237 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.304223 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5"] Apr 23 17:54:49.304350 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.304336 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z" Apr 23 17:54:49.307112 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.307094 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.307112 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.307104 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.307544 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.307525 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-storage-operator\"/\"volume-data-source-validator-dockercfg-vqpc9\"" Apr 23 17:54:49.325442 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.325423 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-phhz6"] Apr 23 17:54:49.325578 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.325557 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-585dfdc468-7h7vz"] Apr 23 17:54:49.325673 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.325579 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.325721 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.325583 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st"] Apr 23 17:54:49.328273 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.328253 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.328696 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.328313 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.328696 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.328252 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Apr 23 17:54:49.328696 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.328495 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Apr 23 17:54:49.328696 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.328552 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-q8s58\"" Apr 23 17:54:49.333963 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.333943 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-config\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.334057 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.333985 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-trusted-ca\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.334057 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.334002 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d01e1208-1867-464a-822f-89683cda0372-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.334057 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.334018 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01e1208-1867-464a-822f-89683cda0372-serving-cert\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.334163 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.334064 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/d01e1208-1867-464a-822f-89683cda0372-snapshots\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.334163 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.334135 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls6t7\" (UniqueName: \"kubernetes.io/projected/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-kube-api-access-ls6t7\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.334228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.334163 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d01e1208-1867-464a-822f-89683cda0372-service-ca-bundle\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.334228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.334183 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-serving-cert\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.334228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.334204 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d01e1208-1867-464a-822f-89683cda0372-tmp\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.334228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.334226 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2ktn2\" (UniqueName: \"kubernetes.io/projected/d01e1208-1867-464a-822f-89683cda0372-kube-api-access-2ktn2\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.350534 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.350518 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z"] Apr 23 17:54:49.350534 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.350537 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk"] Apr 23 17:54:49.350677 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.350547 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6"] Apr 23 17:54:49.350677 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.350664 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.353323 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.353298 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\"" Apr 23 17:54:49.353458 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.353441 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.353517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.353469 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.354434 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.354418 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-x5d7b\"" Apr 23 17:54:49.354434 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.354430 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemetry-config\"" Apr 23 17:54:49.377528 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.377507 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64"] Apr 23 17:54:49.377654 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.377640 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.380316 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.380281 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Apr 23 17:54:49.380420 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.380330 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.380420 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.380336 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.380522 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.380418 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Apr 23 17:54:49.380522 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.380448 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-s2k4w\"" Apr 23 17:54:49.398737 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.398714 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-727jd"] Apr 23 17:54:49.398874 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.398837 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64" Apr 23 17:54:49.401497 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.401481 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"network-diagnostics-dockercfg-kr584\"" Apr 23 17:54:49.421976 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.421931 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-85cf97bcfb-crk2g"] Apr 23 17:54:49.421976 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.421952 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5"] Apr 23 17:54:49.421976 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.421961 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5bb94bc895-f4jk5"] Apr 23 17:54:49.421976 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.421971 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hqlvp"] Apr 23 17:54:49.422155 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.422054 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:49.424830 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.424812 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-k7c2b\"" Apr 23 17:54:49.424830 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.424824 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Apr 23 17:54:49.424967 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.424899 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Apr 23 17:54:49.435314 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435296 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-serving-cert\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.435406 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435321 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d01e1208-1867-464a-822f-89683cda0372-tmp\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.435406 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435339 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ktn2\" (UniqueName: \"kubernetes.io/projected/d01e1208-1867-464a-822f-89683cda0372-kube-api-access-2ktn2\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.435406 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435369 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-config\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.435406 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435388 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm8ql\" (UniqueName: \"kubernetes.io/projected/9601dc49-4014-4c79-9bb2-5871bb8d36a1-kube-api-access-rm8ql\") pod \"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435413 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435430 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435447 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxm9t\" (UniqueName: \"kubernetes.io/projected/223b3de3-2746-4385-a15c-cba2eeb2e9ee-kube-api-access-bxm9t\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435472 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-image-registry-private-configuration\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435497 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-certificates\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435519 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-installation-pull-secrets\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435550 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-trusted-ca\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435565 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d01e1208-1867-464a-822f-89683cda0372-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435582 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01e1208-1867-464a-822f-89683cda0372-serving-cert\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " 
pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.435610 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435603 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9601dc49-4014-4c79-9bb2-5871bb8d36a1-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435620 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/d01e1208-1867-464a-822f-89683cda0372-snapshots\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435637 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-default-certificate\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435666 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-stats-auth\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435696 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-bound-sa-token\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435725 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ls6t7\" (UniqueName: \"kubernetes.io/projected/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-kube-api-access-ls6t7\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435728 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d01e1208-1867-464a-822f-89683cda0372-tmp\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435807 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b39a95d3-b859-4e2d-bbef-fca1ee288a74-ca-trust-extracted\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " 
pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435881 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t526k\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-kube-api-access-t526k\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435916 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zf67\" (UniqueName: \"kubernetes.io/projected/926cf4a9-abea-43b7-baa6-dc9cd9430a00-kube-api-access-9zf67\") pod \"volume-data-source-validator-7c6cbb6c87-24f9z\" (UID: \"926cf4a9-abea-43b7-baa6-dc9cd9430a00\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435948 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.435980 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sd6t\" (UniqueName: \"kubernetes.io/projected/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-kube-api-access-4sd6t\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.436006 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.436033 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-trusted-ca\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.436101 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d01e1208-1867-464a-822f-89683cda0372-service-ca-bundle\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.436122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.436130 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9601dc49-4014-4c79-9bb2-5871bb8d36a1-config\") pod 
\"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.436781 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.436272 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-config\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.436781 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.436375 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d01e1208-1867-464a-822f-89683cda0372-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.436781 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.436524 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/d01e1208-1867-464a-822f-89683cda0372-snapshots\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.436937 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.436799 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d01e1208-1867-464a-822f-89683cda0372-service-ca-bundle\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.439739 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.439720 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01e1208-1867-464a-822f-89683cda0372-serving-cert\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.439807 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.439755 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-serving-cert\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.445142 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.445124 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6txtb"] Apr 23 17:54:49.445277 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.445262 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.445540 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.445516 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls6t7\" (UniqueName: \"kubernetes.io/projected/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-kube-api-access-ls6t7\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.445805 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.445785 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ktn2\" (UniqueName: \"kubernetes.io/projected/d01e1208-1867-464a-822f-89683cda0372-kube-api-access-2ktn2\") pod \"insights-operator-585dfdc468-7h7vz\" (UID: \"d01e1208-1867-464a-822f-89683cda0372\") " pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.447742 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.447723 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 23 17:54:49.447817 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.447726 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 23 17:54:49.447817 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.447728 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-54wff\"" Apr 23 17:54:49.447817 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.447785 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a622cde-4463-4b2b-a60a-0724fdeeb5e3-trusted-ca\") pod \"console-operator-9d4b6777b-phhz6\" (UID: \"5a622cde-4463-4b2b-a60a-0724fdeeb5e3\") " pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.478148 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.478123 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st"] Apr 23 17:54:49.478148 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.478148 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64"] Apr 23 17:54:49.478261 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.478157 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-727jd"] Apr 23 17:54:49.478261 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.478164 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6"] Apr 23 17:54:49.478261 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.478172 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hqlvp"] Apr 23 17:54:49.478261 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.478179 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6txtb"] Apr 23 17:54:49.478261 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.478247 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:49.481027 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.481011 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 23 17:54:49.481099 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.481029 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-f4mbl\"" Apr 23 17:54:49.481099 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.481086 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 23 17:54:49.481190 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.481150 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 23 17:54:49.533545 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.533527 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:49.538194 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538170 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rm8ql\" (UniqueName: \"kubernetes.io/projected/9601dc49-4014-4c79-9bb2-5871bb8d36a1-kube-api-access-rm8ql\") pod \"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.538303 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538205 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:49.538303 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538237 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-image-registry-private-configuration\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.538303 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538260 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/cabecf13-4b77-4125-bdb2-df08000b4d3d-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.538303 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538282 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-bound-sa-token\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.538509 ip-10-0-135-87 
kubenswrapper[2574]: I0423 17:54:49.538307 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:49.538509 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538342 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcwg\" (UniqueName: \"kubernetes.io/projected/fff84aa0-f5b3-4d5a-add6-04dc79b3bf54-kube-api-access-phcwg\") pod \"network-check-source-8894fc9bd-pvs64\" (UID: \"fff84aa0-f5b3-4d5a-add6-04dc79b3bf54\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64" Apr 23 17:54:49.538509 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.538394 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:49.538509 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.538479 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls podName:223b3de3-2746-4385-a15c-cba2eeb2e9ee nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.038459542 +0000 UTC m=+160.402500461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-jhzwk" (UID: "223b3de3-2746-4385-a15c-cba2eeb2e9ee") : secret "samples-operator-tls" not found Apr 23 17:54:49.538709 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538498 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.538709 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538544 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4sd6t\" (UniqueName: \"kubernetes.io/projected/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-kube-api-access-4sd6t\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.538709 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538569 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.538709 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538613 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9601dc49-4014-4c79-9bb2-5871bb8d36a1-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.538709 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.538643 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:49.538709 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.538701 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.038683593 +0000 UTC m=+160.402724513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : secret "router-metrics-certs-default" not found Apr 23 17:54:49.539036 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538751 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxm9t\" (UniqueName: \"kubernetes.io/projected/223b3de3-2746-4385-a15c-cba2eeb2e9ee-kube-api-access-bxm9t\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:49.539036 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.538774 2574 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:49.539036 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538781 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-installation-pull-secrets\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.539036 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.538787 2574 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-5bb94bc895-f4jk5: secret "image-registry-tls" not found Apr 23 17:54:49.539036 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538916 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zf67\" (UniqueName: \"kubernetes.io/projected/926cf4a9-abea-43b7-baa6-dc9cd9430a00-kube-api-access-9zf67\") pod \"volume-data-source-validator-7c6cbb6c87-24f9z\" (UID: \"926cf4a9-abea-43b7-baa6-dc9cd9430a00\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z" Apr 23 17:54:49.539036 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538958 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.539036 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.538990 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-config\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.539036 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539017 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-certificates\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539061 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9601dc49-4014-4c79-9bb2-5871bb8d36a1-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539088 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t526k\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-kube-api-access-t526k\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539113 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539152 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-default-certificate\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539176 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b39a95d3-b859-4e2d-bbef-fca1ee288a74-ca-trust-extracted\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.539211 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls podName:b39a95d3-b859-4e2d-bbef-fca1ee288a74 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.03919079 +0000 UTC m=+160.403231725 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls") pod "image-registry-5bb94bc895-f4jk5" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74") : secret "image-registry-tls" not found Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539238 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2879l\" (UniqueName: \"kubernetes.io/projected/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-kube-api-access-2879l\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539279 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-stats-auth\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539344 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-serving-cert\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539376 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g24v5\" (UniqueName: \"kubernetes.io/projected/cabecf13-4b77-4125-bdb2-df08000b4d3d-kube-api-access-g24v5\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539406 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-trusted-ca\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539421 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9601dc49-4014-4c79-9bb2-5871bb8d36a1-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539436 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:49.539571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.539473 2574 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b39a95d3-b859-4e2d-bbef-fca1ee288a74-ca-trust-extracted\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.540395 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.539512 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.039495503 +0000 UTC m=+160.403536447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:49.540395 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.540081 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-certificates\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.540395 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.540378 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-trusted-ca\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.541999 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.541973 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-installation-pull-secrets\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.542218 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.542195 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-image-registry-private-configuration\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.542570 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.542543 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9601dc49-4014-4c79-9bb2-5871bb8d36a1-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.542902 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.542825 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-default-certificate\") 
pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.543119 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.543081 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-stats-auth\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.545452 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.545435 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-585dfdc468-7h7vz" Apr 23 17:54:49.547767 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.547714 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sd6t\" (UniqueName: \"kubernetes.io/projected/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-kube-api-access-4sd6t\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:49.549541 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.549487 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm8ql\" (UniqueName: \"kubernetes.io/projected/9601dc49-4014-4c79-9bb2-5871bb8d36a1-kube-api-access-rm8ql\") pod \"kube-storage-version-migrator-operator-6769c5d45-567k5\" (UID: \"9601dc49-4014-4c79-9bb2-5871bb8d36a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.550252 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.550222 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxm9t\" (UniqueName: \"kubernetes.io/projected/223b3de3-2746-4385-a15c-cba2eeb2e9ee-kube-api-access-bxm9t\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:49.550497 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.550469 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-bound-sa-token\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.550911 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.550827 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zf67\" (UniqueName: \"kubernetes.io/projected/926cf4a9-abea-43b7-baa6-dc9cd9430a00-kube-api-access-9zf67\") pod \"volume-data-source-validator-7c6cbb6c87-24f9z\" (UID: \"926cf4a9-abea-43b7-baa6-dc9cd9430a00\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z" Apr 23 17:54:49.551930 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.551911 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t526k\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-kube-api-access-t526k\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:49.618870 ip-10-0-135-87 kubenswrapper[2574]: I0423 
17:54:49.614967 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z" Apr 23 17:54:49.635443 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.635222 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641139 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2879l\" (UniqueName: \"kubernetes.io/projected/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-kube-api-access-2879l\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641176 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-serving-cert\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641194 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g24v5\" (UniqueName: \"kubernetes.io/projected/cabecf13-4b77-4125-bdb2-df08000b4d3d-kube-api-access-g24v5\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641214 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptdq7\" (UniqueName: \"kubernetes.io/projected/570f4ccf-8f66-420f-9543-207c02da2783-kube-api-access-ptdq7\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641247 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641293 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641318 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641349 2574 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/cabecf13-4b77-4125-bdb2-df08000b4d3d-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641374 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/570f4ccf-8f66-420f-9543-207c02da2783-tmp-dir\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641414 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641449 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phcwg\" (UniqueName: \"kubernetes.io/projected/fff84aa0-f5b3-4d5a-add6-04dc79b3bf54-kube-api-access-phcwg\") pod \"network-check-source-8894fc9bd-pvs64\" (UID: \"fff84aa0-f5b3-4d5a-add6-04dc79b3bf54\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641503 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/570f4ccf-8f66-420f-9543-207c02da2783-config-volume\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.641582 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-config\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.642152 2574 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.642175 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-config\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.642357 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.642230 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert podName:e4f9f970-44a9-4e79-ac39-0cfc094cc4ca nodeName:}" failed. 
No retries permitted until 2026-04-23 17:54:50.142210553 +0000 UTC m=+160.506251473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-727jd" (UID: "e4f9f970-44a9-4e79-ac39-0cfc094cc4ca") : secret "networking-console-plugin-cert" not found Apr 23 17:54:49.643234 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.642293 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/cabecf13-4b77-4125-bdb2-df08000b4d3d-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.643234 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.642389 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.643234 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.642471 2574 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:49.643234 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.642487 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rns\" (UniqueName: \"kubernetes.io/projected/a455b3cc-b20e-46c2-9f70-3c5be09cad64-kube-api-access-56rns\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:49.643234 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.642517 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls podName:cabecf13-4b77-4125-bdb2-df08000b4d3d nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.142501504 +0000 UTC m=+160.506542423 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-8b4st" (UID: "cabecf13-4b77-4125-bdb2-df08000b4d3d") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:49.643234 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.643138 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:49.645671 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.645627 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-serving-cert\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.657867 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.654236 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g24v5\" (UniqueName: \"kubernetes.io/projected/cabecf13-4b77-4125-bdb2-df08000b4d3d-kube-api-access-g24v5\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:49.657867 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.654342 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phcwg\" (UniqueName: \"kubernetes.io/projected/fff84aa0-f5b3-4d5a-add6-04dc79b3bf54-kube-api-access-phcwg\") pod \"network-check-source-8894fc9bd-pvs64\" (UID: \"fff84aa0-f5b3-4d5a-add6-04dc79b3bf54\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64" Apr 23 17:54:49.657867 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.654618 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2879l\" (UniqueName: \"kubernetes.io/projected/e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a-kube-api-access-2879l\") pod \"service-ca-operator-d6fc45fc5-jfgn6\" (UID: \"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.686567 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.686514 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" Apr 23 17:54:49.706049 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.705956 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64" Apr 23 17:54:49.720703 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.720640 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-585dfdc468-7h7vz"] Apr 23 17:54:49.723044 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:49.723010 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01e1208_1867_464a_822f_89683cda0372.slice/crio-e93f04b416abda92c92be3ba61b05efb7f81efa6348ad8f0aede62c41bcf28b4 WatchSource:0}: Error finding container e93f04b416abda92c92be3ba61b05efb7f81efa6348ad8f0aede62c41bcf28b4: Status 404 returned error can't find the container with id e93f04b416abda92c92be3ba61b05efb7f81efa6348ad8f0aede62c41bcf28b4 Apr 23 17:54:49.730465 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.727282 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-phhz6"] Apr 23 17:54:49.731018 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:49.730974 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a622cde_4463_4b2b_a60a_0724fdeeb5e3.slice/crio-34d248030e680bf6eae1939aeab8836094a8cf1bb98412fc118cd1e35fe7e7a4 WatchSource:0}: Error finding container 34d248030e680bf6eae1939aeab8836094a8cf1bb98412fc118cd1e35fe7e7a4: Status 404 returned error can't find the container with id 34d248030e680bf6eae1939aeab8836094a8cf1bb98412fc118cd1e35fe7e7a4 Apr 23 17:54:49.743678 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.743612 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/570f4ccf-8f66-420f-9543-207c02da2783-config-volume\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.743810 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.743711 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-56rns\" (UniqueName: \"kubernetes.io/projected/a455b3cc-b20e-46c2-9f70-3c5be09cad64-kube-api-access-56rns\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:49.743810 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.743761 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdq7\" (UniqueName: \"kubernetes.io/projected/570f4ccf-8f66-420f-9543-207c02da2783-kube-api-access-ptdq7\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.743999 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.743871 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.743999 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.743899 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " 
pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:49.743999 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.743943 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/570f4ccf-8f66-420f-9543-207c02da2783-tmp-dir\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.744445 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.744169 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:49.744445 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.744243 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert podName:a455b3cc-b20e-46c2-9f70-3c5be09cad64 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.24422543 +0000 UTC m=+160.608266349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert") pod "ingress-canary-6txtb" (UID: "a455b3cc-b20e-46c2-9f70-3c5be09cad64") : secret "canary-serving-cert" not found Apr 23 17:54:49.744445 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.744297 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/570f4ccf-8f66-420f-9543-207c02da2783-config-volume\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.745799 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.745345 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:49.745799 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:49.745400 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls podName:570f4ccf-8f66-420f-9543-207c02da2783 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.245383388 +0000 UTC m=+160.609424313 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls") pod "dns-default-hqlvp" (UID: "570f4ccf-8f66-420f-9543-207c02da2783") : secret "dns-default-metrics-tls" not found Apr 23 17:54:49.748187 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.748145 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/570f4ccf-8f66-420f-9543-207c02da2783-tmp-dir\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.758759 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.758712 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-56rns\" (UniqueName: \"kubernetes.io/projected/a455b3cc-b20e-46c2-9f70-3c5be09cad64-kube-api-access-56rns\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:49.759322 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.759280 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdq7\" (UniqueName: \"kubernetes.io/projected/570f4ccf-8f66-420f-9543-207c02da2783-kube-api-access-ptdq7\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:49.776896 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.776772 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z"] Apr 23 17:54:49.790436 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:49.790406 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod926cf4a9_abea_43b7_baa6_dc9cd9430a00.slice/crio-b9694d3dbbbfda419879ea603111fdfc47ceb7d3a14a295afa784484ac788cda WatchSource:0}: Error finding container b9694d3dbbbfda419879ea603111fdfc47ceb7d3a14a295afa784484ac788cda: Status 404 returned error can't find the container with id b9694d3dbbbfda419879ea603111fdfc47ceb7d3a14a295afa784484ac788cda Apr 23 17:54:49.797233 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.797213 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5"] Apr 23 17:54:49.800065 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:49.800039 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9601dc49_4014_4c79_9bb2_5871bb8d36a1.slice/crio-226935b1852cf70cd9b2ca3224ccf6e3302e57f58faaf22f9946b15aa70b187a WatchSource:0}: Error finding container 226935b1852cf70cd9b2ca3224ccf6e3302e57f58faaf22f9946b15aa70b187a: Status 404 returned error can't find the container with id 226935b1852cf70cd9b2ca3224ccf6e3302e57f58faaf22f9946b15aa70b187a Apr 23 17:54:49.844573 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.844533 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6"] Apr 23 17:54:49.846814 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:49.846790 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode21ba2c0_7cc7_4b50_ba6c_1fb814e1f50a.slice/crio-b9b249d61afdae79e53b89f73656ac60d5f69b1bbf25ec6e81d6c7f84c9a6372 
WatchSource:0}: Error finding container b9b249d61afdae79e53b89f73656ac60d5f69b1bbf25ec6e81d6c7f84c9a6372: Status 404 returned error can't find the container with id b9b249d61afdae79e53b89f73656ac60d5f69b1bbf25ec6e81d6c7f84c9a6372 Apr 23 17:54:49.856463 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:49.856443 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64"] Apr 23 17:54:49.858408 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:49.858389 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfff84aa0_f5b3_4d5a_add6_04dc79b3bf54.slice/crio-e68ed35417cd6d852412829d6ad0d3f314d3f1f3834683ffe8bb49afcc3b6005 WatchSource:0}: Error finding container e68ed35417cd6d852412829d6ad0d3f314d3f1f3834683ffe8bb49afcc3b6005: Status 404 returned error can't find the container with id e68ed35417cd6d852412829d6ad0d3f314d3f1f3834683ffe8bb49afcc3b6005 Apr 23 17:54:50.046891 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.046803 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:50.047003 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.046893 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:50.047003 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.046913 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:50.047003 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.046938 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:50.047003 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.046947 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:50.047136 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.047006 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls podName:223b3de3-2746-4385-a15c-cba2eeb2e9ee nodeName:}" failed. No retries permitted until 2026-04-23 17:54:51.04698643 +0000 UTC m=+161.411027351 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-jhzwk" (UID: "223b3de3-2746-4385-a15c-cba2eeb2e9ee") : secret "samples-operator-tls" not found Apr 23 17:54:50.047136 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.047024 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:50.047136 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.047047 2574 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:50.047136 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.047060 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:51.047045535 +0000 UTC m=+161.411086459 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:50.047136 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.047062 2574 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-5bb94bc895-f4jk5: secret "image-registry-tls" not found Apr 23 17:54:50.047136 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.047078 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:51.04706839 +0000 UTC m=+161.411109312 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : secret "router-metrics-certs-default" not found Apr 23 17:54:50.047136 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.047099 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls podName:b39a95d3-b859-4e2d-bbef-fca1ee288a74 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:51.047088756 +0000 UTC m=+161.411129676 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls") pod "image-registry-5bb94bc895-f4jk5" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74") : secret "image-registry-tls" not found Apr 23 17:54:50.148344 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.148306 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:50.148503 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.148427 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:50.148578 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.148545 2574 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:50.148643 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.148613 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert podName:e4f9f970-44a9-4e79-ac39-0cfc094cc4ca nodeName:}" failed. No retries permitted until 2026-04-23 17:54:51.148594302 +0000 UTC m=+161.512635236 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-727jd" (UID: "e4f9f970-44a9-4e79-ac39-0cfc094cc4ca") : secret "networking-console-plugin-cert" not found Apr 23 17:54:50.148643 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.148633 2574 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:50.148753 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.148679 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls podName:cabecf13-4b77-4125-bdb2-df08000b4d3d nodeName:}" failed. No retries permitted until 2026-04-23 17:54:51.148662314 +0000 UTC m=+161.512703234 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-8b4st" (UID: "cabecf13-4b77-4125-bdb2-df08000b4d3d") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:50.250317 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.249619 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:50.250317 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.249663 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:50.250317 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.249785 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:50.250317 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.249861 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert podName:a455b3cc-b20e-46c2-9f70-3c5be09cad64 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:51.249824079 +0000 UTC m=+161.613865005 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert") pod "ingress-canary-6txtb" (UID: "a455b3cc-b20e-46c2-9f70-3c5be09cad64") : secret "canary-serving-cert" not found Apr 23 17:54:50.250317 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.250238 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:50.250317 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:50.250286 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls podName:570f4ccf-8f66-420f-9543-207c02da2783 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:51.250270382 +0000 UTC m=+161.614311316 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls") pod "dns-default-hqlvp" (UID: "570f4ccf-8f66-420f-9543-207c02da2783") : secret "dns-default-metrics-tls" not found Apr 23 17:54:50.665447 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.665372 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" event={"ID":"9601dc49-4014-4c79-9bb2-5871bb8d36a1","Type":"ContainerStarted","Data":"226935b1852cf70cd9b2ca3224ccf6e3302e57f58faaf22f9946b15aa70b187a"} Apr 23 17:54:50.667406 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.667374 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-7h7vz" event={"ID":"d01e1208-1867-464a-822f-89683cda0372","Type":"ContainerStarted","Data":"e93f04b416abda92c92be3ba61b05efb7f81efa6348ad8f0aede62c41bcf28b4"} Apr 23 17:54:50.668871 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.668823 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z" event={"ID":"926cf4a9-abea-43b7-baa6-dc9cd9430a00","Type":"ContainerStarted","Data":"b9694d3dbbbfda419879ea603111fdfc47ceb7d3a14a295afa784484ac788cda"} Apr 23 17:54:50.669942 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.669918 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64" event={"ID":"fff84aa0-f5b3-4d5a-add6-04dc79b3bf54","Type":"ContainerStarted","Data":"e68ed35417cd6d852412829d6ad0d3f314d3f1f3834683ffe8bb49afcc3b6005"} Apr 23 17:54:50.672418 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.672395 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" event={"ID":"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a","Type":"ContainerStarted","Data":"b9b249d61afdae79e53b89f73656ac60d5f69b1bbf25ec6e81d6c7f84c9a6372"} Apr 23 17:54:50.675260 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:50.675233 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" event={"ID":"5a622cde-4463-4b2b-a60a-0724fdeeb5e3","Type":"ContainerStarted","Data":"34d248030e680bf6eae1939aeab8836094a8cf1bb98412fc118cd1e35fe7e7a4"} Apr 23 17:54:51.056162 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:51.056125 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:51.056337 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:51.056242 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:51.056337 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:51.056330 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:51.056469 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:51.056357 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:51.056469 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.056391 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:53.05637096 +0000 UTC m=+163.420411897 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:51.056469 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.056444 2574 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:51.056469 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.056455 2574 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-5bb94bc895-f4jk5: secret "image-registry-tls" not found Apr 23 17:54:51.056669 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.056470 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:51.056669 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.056490 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls podName:b39a95d3-b859-4e2d-bbef-fca1ee288a74 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:53.056477545 +0000 UTC m=+163.420518477 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls") pod "image-registry-5bb94bc895-f4jk5" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74") : secret "image-registry-tls" not found Apr 23 17:54:51.056669 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.056536 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls podName:223b3de3-2746-4385-a15c-cba2eeb2e9ee nodeName:}" failed. No retries permitted until 2026-04-23 17:54:53.056518715 +0000 UTC m=+163.420559672 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-jhzwk" (UID: "223b3de3-2746-4385-a15c-cba2eeb2e9ee") : secret "samples-operator-tls" not found Apr 23 17:54:51.056669 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.056541 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:51.056669 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.056571 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:53.056561205 +0000 UTC m=+163.420602126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : secret "router-metrics-certs-default" not found Apr 23 17:54:51.157462 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:51.157413 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:51.157649 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:51.157538 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:51.157775 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.157713 2574 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:51.157893 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.157780 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert podName:e4f9f970-44a9-4e79-ac39-0cfc094cc4ca nodeName:}" failed. No retries permitted until 2026-04-23 17:54:53.157761087 +0000 UTC m=+163.521802009 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-727jd" (UID: "e4f9f970-44a9-4e79-ac39-0cfc094cc4ca") : secret "networking-console-plugin-cert" not found Apr 23 17:54:51.158485 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.158129 2574 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:51.158485 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.158184 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls podName:cabecf13-4b77-4125-bdb2-df08000b4d3d nodeName:}" failed. No retries permitted until 2026-04-23 17:54:53.158169588 +0000 UTC m=+163.522210513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-8b4st" (UID: "cabecf13-4b77-4125-bdb2-df08000b4d3d") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:51.258872 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:51.258563 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:51.258872 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:51.258611 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:51.258872 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.258737 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:51.258872 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.258753 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:51.258872 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.258795 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert podName:a455b3cc-b20e-46c2-9f70-3c5be09cad64 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:53.258777666 +0000 UTC m=+163.622818589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert") pod "ingress-canary-6txtb" (UID: "a455b3cc-b20e-46c2-9f70-3c5be09cad64") : secret "canary-serving-cert" not found Apr 23 17:54:51.258872 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:51.258814 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls podName:570f4ccf-8f66-420f-9543-207c02da2783 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:53.25880539 +0000 UTC m=+163.622846314 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls") pod "dns-default-hqlvp" (UID: "570f4ccf-8f66-420f-9543-207c02da2783") : secret "dns-default-metrics-tls" not found Apr 23 17:54:53.077000 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.076952 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.077052 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.077091 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.077100 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.077125 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.077156 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.077181 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls podName:223b3de3-2746-4385-a15c-cba2eeb2e9ee nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.077159948 +0000 UTC m=+167.441200890 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-jhzwk" (UID: "223b3de3-2746-4385-a15c-cba2eeb2e9ee") : secret "samples-operator-tls" not found Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.077218 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.077200348 +0000 UTC m=+167.441241269 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : secret "router-metrics-certs-default" not found Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.077242 2574 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.077250 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.077235695 +0000 UTC m=+167.441276619 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.077258 2574 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-5bb94bc895-f4jk5: secret "image-registry-tls" not found Apr 23 17:54:53.077453 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.077300 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls podName:b39a95d3-b859-4e2d-bbef-fca1ee288a74 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.077286857 +0000 UTC m=+167.441327780 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls") pod "image-registry-5bb94bc895-f4jk5" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74") : secret "image-registry-tls" not found Apr 23 17:54:53.178513 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.178476 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:53.178672 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.178587 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:53.178672 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.178586 2574 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:53.178780 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.178710 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert podName:e4f9f970-44a9-4e79-ac39-0cfc094cc4ca nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.178692707 +0000 UTC m=+167.542733631 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-727jd" (UID: "e4f9f970-44a9-4e79-ac39-0cfc094cc4ca") : secret "networking-console-plugin-cert" not found Apr 23 17:54:53.178780 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.178731 2574 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:53.178913 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.178803 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls podName:cabecf13-4b77-4125-bdb2-df08000b4d3d nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.178784588 +0000 UTC m=+167.542825719 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-8b4st" (UID: "cabecf13-4b77-4125-bdb2-df08000b4d3d") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:53.268960 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.268929 2574 scope.go:117] "RemoveContainer" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" Apr 23 17:54:53.269164 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.269144 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_openshift-machine-config-operator(b86d5a8aaa7fecdf67a597e125a8b168)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podUID="b86d5a8aaa7fecdf67a597e125a8b168" Apr 23 17:54:53.279109 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.279078 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:53.279218 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:53.279117 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:53.279273 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.279227 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:53.279273 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.279249 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:53.279357 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.279300 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls podName:570f4ccf-8f66-420f-9543-207c02da2783 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.279280545 +0000 UTC m=+167.643321468 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls") pod "dns-default-hqlvp" (UID: "570f4ccf-8f66-420f-9543-207c02da2783") : secret "dns-default-metrics-tls" not found Apr 23 17:54:53.279357 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:53.279321 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert podName:a455b3cc-b20e-46c2-9f70-3c5be09cad64 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.279308858 +0000 UTC m=+167.643349777 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert") pod "ingress-canary-6txtb" (UID: "a455b3cc-b20e-46c2-9f70-3c5be09cad64") : secret "canary-serving-cert" not found Apr 23 17:54:55.689509 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.689468 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" event={"ID":"9601dc49-4014-4c79-9bb2-5871bb8d36a1","Type":"ContainerStarted","Data":"befc7f4764ac5818b2382b5bad205873b3a8a363dc5eb65d77879c0318b3d0b9"} Apr 23 17:54:55.691487 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.691451 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-7h7vz" event={"ID":"d01e1208-1867-464a-822f-89683cda0372","Type":"ContainerStarted","Data":"1c4cdf25e472c3716c4c7e074823bcde30360c9c5e175bc6db657d79e23218c8"} Apr 23 17:54:55.692794 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.692760 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z" event={"ID":"926cf4a9-abea-43b7-baa6-dc9cd9430a00","Type":"ContainerStarted","Data":"b2177d660f1a2d3b828ffb948779ec150ea6883a1d5af456a13b704e70990ab4"} Apr 23 17:54:55.695066 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.695030 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64" event={"ID":"fff84aa0-f5b3-4d5a-add6-04dc79b3bf54","Type":"ContainerStarted","Data":"8f25c6c5666143f3de0ef87615dccf7fc20d4943d771c2a0c1fe1934bb006021"} Apr 23 17:54:55.696463 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.696432 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" event={"ID":"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a","Type":"ContainerStarted","Data":"9a800249c686791fae365233024167ddf308d243c928860446996be79883e5f4"} Apr 23 17:54:55.697979 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.697959 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/0.log" Apr 23 17:54:55.698076 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.697998 2574 generic.go:358] "Generic (PLEG): container finished" podID="5a622cde-4463-4b2b-a60a-0724fdeeb5e3" containerID="74eb9080b46393faeabf4dd112a0cf98ef1e014efddbb329ace437d09fbf800a" exitCode=255 Apr 23 17:54:55.698076 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.698031 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" event={"ID":"5a622cde-4463-4b2b-a60a-0724fdeeb5e3","Type":"ContainerDied","Data":"74eb9080b46393faeabf4dd112a0cf98ef1e014efddbb329ace437d09fbf800a"} Apr 23 17:54:55.698232 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.698216 2574 scope.go:117] "RemoveContainer" containerID="74eb9080b46393faeabf4dd112a0cf98ef1e014efddbb329ace437d09fbf800a" Apr 23 17:54:55.739723 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.739668 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" podStartSLOduration=44.846938518 podStartE2EDuration="49.739655211s" podCreationTimestamp="2026-04-23 17:54:06 
+0000 UTC" firstStartedPulling="2026-04-23 17:54:49.802252735 +0000 UTC m=+160.166293661" lastFinishedPulling="2026-04-23 17:54:54.694969433 +0000 UTC m=+165.059010354" observedRunningTime="2026-04-23 17:54:55.738165348 +0000 UTC m=+166.102206293" watchObservedRunningTime="2026-04-23 17:54:55.739655211 +0000 UTC m=+166.103696151" Apr 23 17:54:55.762703 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.762638 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" podStartSLOduration=42.886238173 podStartE2EDuration="47.762619747s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:54:49.848597673 +0000 UTC m=+160.212638592" lastFinishedPulling="2026-04-23 17:54:54.724979244 +0000 UTC m=+165.089020166" observedRunningTime="2026-04-23 17:54:55.761891307 +0000 UTC m=+166.125932248" watchObservedRunningTime="2026-04-23 17:54:55.762619747 +0000 UTC m=+166.126660686" Apr 23 17:54:55.793999 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.793637 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pvs64" podStartSLOduration=43.913920842 podStartE2EDuration="48.793605846s" podCreationTimestamp="2026-04-23 17:54:07 +0000 UTC" firstStartedPulling="2026-04-23 17:54:49.860246554 +0000 UTC m=+160.224287473" lastFinishedPulling="2026-04-23 17:54:54.739931547 +0000 UTC m=+165.103972477" observedRunningTime="2026-04-23 17:54:55.791346278 +0000 UTC m=+166.155387220" watchObservedRunningTime="2026-04-23 17:54:55.793605846 +0000 UTC m=+166.157646787" Apr 23 17:54:55.827628 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.827557 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-585dfdc468-7h7vz" podStartSLOduration=45.85897249 podStartE2EDuration="50.82754225s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="2026-04-23 17:54:49.726090915 +0000 UTC m=+160.090131839" lastFinishedPulling="2026-04-23 17:54:54.694660675 +0000 UTC m=+165.058701599" observedRunningTime="2026-04-23 17:54:55.821681616 +0000 UTC m=+166.185722557" watchObservedRunningTime="2026-04-23 17:54:55.82754225 +0000 UTC m=+166.191583213" Apr 23 17:54:55.853630 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:55.853585 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-24f9z" podStartSLOduration=45.957166766 podStartE2EDuration="50.853570492s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="2026-04-23 17:54:49.792950684 +0000 UTC m=+160.156991602" lastFinishedPulling="2026-04-23 17:54:54.689354404 +0000 UTC m=+165.053395328" observedRunningTime="2026-04-23 17:54:55.853251415 +0000 UTC m=+166.217292356" watchObservedRunningTime="2026-04-23 17:54:55.853570492 +0000 UTC m=+166.217611433" Apr 23 17:54:56.703309 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.703278 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 17:54:56.703746 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.703696 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/0.log" Apr 23 
17:54:56.703818 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.703742 2574 generic.go:358] "Generic (PLEG): container finished" podID="5a622cde-4463-4b2b-a60a-0724fdeeb5e3" containerID="19dd6133c16fdf0693575bdf162010fd2ed719beb2781f2a25eceacd4b76fda6" exitCode=255 Apr 23 17:54:56.703949 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.703924 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" event={"ID":"5a622cde-4463-4b2b-a60a-0724fdeeb5e3","Type":"ContainerDied","Data":"19dd6133c16fdf0693575bdf162010fd2ed719beb2781f2a25eceacd4b76fda6"} Apr 23 17:54:56.704004 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.703975 2574 scope.go:117] "RemoveContainer" containerID="74eb9080b46393faeabf4dd112a0cf98ef1e014efddbb329ace437d09fbf800a" Apr 23 17:54:56.704223 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.704198 2574 scope.go:117] "RemoveContainer" containerID="19dd6133c16fdf0693575bdf162010fd2ed719beb2781f2a25eceacd4b76fda6" Apr 23 17:54:56.704458 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:56.704415 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-phhz6_openshift-console-operator(5a622cde-4463-4b2b-a60a-0724fdeeb5e3)\"" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" podUID="5a622cde-4463-4b2b-a60a-0724fdeeb5e3" Apr 23 17:54:56.826113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.826083 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb"] Apr 23 17:54:56.835684 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.835665 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" Apr 23 17:54:56.839284 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.839263 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-wlkdx\"" Apr 23 17:54:56.839389 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.839348 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:56.839537 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.839390 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Apr 23 17:54:56.840699 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.840680 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb"] Apr 23 17:54:56.914739 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:56.914702 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcpm\" (UniqueName: \"kubernetes.io/projected/c068db57-9b93-4515-9608-59a3ccaa6d07-kube-api-access-6qcpm\") pod \"migrator-74bb7799d9-jggdb\" (UID: \"c068db57-9b93-4515-9608-59a3ccaa6d07\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" Apr 23 17:54:57.015438 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.015366 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcpm\" (UniqueName: \"kubernetes.io/projected/c068db57-9b93-4515-9608-59a3ccaa6d07-kube-api-access-6qcpm\") pod \"migrator-74bb7799d9-jggdb\" (UID: \"c068db57-9b93-4515-9608-59a3ccaa6d07\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" Apr 23 17:54:57.031084 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.031055 2574 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:54:57.040619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.040594 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcpm\" (UniqueName: \"kubernetes.io/projected/c068db57-9b93-4515-9608-59a3ccaa6d07-kube-api-access-6qcpm\") pod \"migrator-74bb7799d9-jggdb\" (UID: \"c068db57-9b93-4515-9608-59a3ccaa6d07\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" Apr 23 17:54:57.116410 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.116379 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:57.116513 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.116482 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:54:57.116560 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.116545 2574 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.116527242 +0000 UTC m=+175.480568164 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:57.116607 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.116593 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:54:57.116653 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.116623 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:54:57.116653 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.116598 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:57.116737 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.116655 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:57.116737 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.116693 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.116681014 +0000 UTC m=+175.480721947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : secret "router-metrics-certs-default" not found Apr 23 17:54:57.116737 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.116708 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls podName:223b3de3-2746-4385-a15c-cba2eeb2e9ee nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.11669981 +0000 UTC m=+175.480740729 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-jhzwk" (UID: "223b3de3-2746-4385-a15c-cba2eeb2e9ee") : secret "samples-operator-tls" not found Apr 23 17:54:57.116737 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.116712 2574 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:57.116737 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.116724 2574 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-5bb94bc895-f4jk5: secret "image-registry-tls" not found Apr 23 17:54:57.116980 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.116750 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls podName:b39a95d3-b859-4e2d-bbef-fca1ee288a74 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.116741865 +0000 UTC m=+175.480782783 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls") pod "image-registry-5bb94bc895-f4jk5" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74") : secret "image-registry-tls" not found Apr 23 17:54:57.147531 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.147510 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" Apr 23 17:54:57.217784 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.217747 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:54:57.217912 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.217835 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:54:57.217969 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.217938 2574 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:57.218014 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.218004 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls podName:cabecf13-4b77-4125-bdb2-df08000b4d3d nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.217989829 +0000 UTC m=+175.582030749 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-8b4st" (UID: "cabecf13-4b77-4125-bdb2-df08000b4d3d") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:57.218064 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.217939 2574 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:57.218064 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.218045 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert podName:e4f9f970-44a9-4e79-ac39-0cfc094cc4ca nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.218039229 +0000 UTC m=+175.582080147 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-727jd" (UID: "e4f9f970-44a9-4e79-ac39-0cfc094cc4ca") : secret "networking-console-plugin-cert" not found Apr 23 17:54:57.270774 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.270749 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb"] Apr 23 17:54:57.272815 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:57.272791 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc068db57_9b93_4515_9608_59a3ccaa6d07.slice/crio-8bd50b4b2a81aeaae02183879be59986491e6f1e708fccaff8c887455c5e8f5a WatchSource:0}: Error finding container 8bd50b4b2a81aeaae02183879be59986491e6f1e708fccaff8c887455c5e8f5a: Status 404 returned error can't find the container with id 8bd50b4b2a81aeaae02183879be59986491e6f1e708fccaff8c887455c5e8f5a Apr 23 17:54:57.318666 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.318637 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:54:57.318767 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.318668 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:54:57.318818 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.318783 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:57.318877 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.318836 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls podName:570f4ccf-8f66-420f-9543-207c02da2783 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.318822751 +0000 UTC m=+175.682863670 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls") pod "dns-default-hqlvp" (UID: "570f4ccf-8f66-420f-9543-207c02da2783") : secret "dns-default-metrics-tls" not found Apr 23 17:54:57.318877 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.318839 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:57.318964 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.318904 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert podName:a455b3cc-b20e-46c2-9f70-3c5be09cad64 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.318888635 +0000 UTC m=+175.682929554 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert") pod "ingress-canary-6txtb" (UID: "a455b3cc-b20e-46c2-9f70-3c5be09cad64") : secret "canary-serving-cert" not found Apr 23 17:54:57.708676 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.708641 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" event={"ID":"c068db57-9b93-4515-9608-59a3ccaa6d07","Type":"ContainerStarted","Data":"8bd50b4b2a81aeaae02183879be59986491e6f1e708fccaff8c887455c5e8f5a"} Apr 23 17:54:57.710373 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.710348 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 17:54:57.710797 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:57.710777 2574 scope.go:117] "RemoveContainer" containerID="19dd6133c16fdf0693575bdf162010fd2ed719beb2781f2a25eceacd4b76fda6" Apr 23 17:54:57.711034 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:57.711012 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-phhz6_openshift-console-operator(5a622cde-4463-4b2b-a60a-0724fdeeb5e3)\"" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" podUID="5a622cde-4463-4b2b-a60a-0724fdeeb5e3" Apr 23 17:54:58.535490 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.535419 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z72z9_4f625df8-2016-4ff3-8cc7-d03314b05183/dns-node-resolver/0.log" Apr 23 17:54:58.714735 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.714698 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" event={"ID":"c068db57-9b93-4515-9608-59a3ccaa6d07","Type":"ContainerStarted","Data":"f7b9d8370acc1cc9c9713ec7f677275618ac45a02393c46b24c42828c7cce409"} Apr 23 17:54:58.715094 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.714740 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" event={"ID":"c068db57-9b93-4515-9608-59a3ccaa6d07","Type":"ContainerStarted","Data":"13b59f3ae4b3f6f9af9f3f2e011f1e4967dbc8347df7bc29d807d37b42ea1c8a"} Apr 23 17:54:58.750276 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.750244 2574 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-service-ca/service-ca-865cb79987-594l8"] Apr 23 17:54:58.750430 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.750392 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-jggdb" podStartSLOduration=1.779560342 podStartE2EDuration="2.750379048s" podCreationTimestamp="2026-04-23 17:54:56 +0000 UTC" firstStartedPulling="2026-04-23 17:54:57.274733911 +0000 UTC m=+167.638774832" lastFinishedPulling="2026-04-23 17:54:58.245552614 +0000 UTC m=+168.609593538" observedRunningTime="2026-04-23 17:54:58.748284439 +0000 UTC m=+169.112325380" watchObservedRunningTime="2026-04-23 17:54:58.750379048 +0000 UTC m=+169.114419989" Apr 23 17:54:58.751941 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.751927 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.759750 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.759729 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Apr 23 17:54:58.760218 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.760195 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Apr 23 17:54:58.760504 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.760492 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Apr 23 17:54:58.760562 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.760547 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Apr 23 17:54:58.762969 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.762952 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-5wcns\"" Apr 23 17:54:58.768799 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.768781 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-865cb79987-594l8"] Apr 23 17:54:58.831175 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.831089 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shb6r\" (UniqueName: \"kubernetes.io/projected/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-kube-api-access-shb6r\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.831307 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.831266 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-signing-key\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.831307 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.831296 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-signing-cabundle\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.932005 ip-10-0-135-87 kubenswrapper[2574]: 
I0423 17:54:58.931964 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-shb6r\" (UniqueName: \"kubernetes.io/projected/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-kube-api-access-shb6r\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.932170 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.932146 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-signing-key\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.932225 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.932177 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-signing-cabundle\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.932807 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.932784 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-signing-cabundle\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.934670 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.934649 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-signing-key\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:58.942505 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:58.942484 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-shb6r\" (UniqueName: \"kubernetes.io/projected/6069d0a0-db4c-460f-aff4-5f02bfbcfd37-kube-api-access-shb6r\") pod \"service-ca-865cb79987-594l8\" (UID: \"6069d0a0-db4c-460f-aff4-5f02bfbcfd37\") " pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:59.060225 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.060201 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-865cb79987-594l8" Apr 23 17:54:59.141301 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.141276 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-4767s_58265a7e-9515-43ed-8838-b59c7bc68f1a/node-ca/0.log" Apr 23 17:54:59.179226 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.179201 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-865cb79987-594l8"] Apr 23 17:54:59.182005 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:54:59.181982 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6069d0a0_db4c_460f_aff4_5f02bfbcfd37.slice/crio-b395f48e66e5e9bb46f4d1258726eed221c2341f15f5817049cf5585a99aa7e6 WatchSource:0}: Error finding container b395f48e66e5e9bb46f4d1258726eed221c2341f15f5817049cf5585a99aa7e6: Status 404 returned error can't find the container with id b395f48e66e5e9bb46f4d1258726eed221c2341f15f5817049cf5585a99aa7e6 Apr 23 17:54:59.534013 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.533985 2574 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:59.534160 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.534020 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:54:59.534357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.534344 2574 scope.go:117] "RemoveContainer" containerID="19dd6133c16fdf0693575bdf162010fd2ed719beb2781f2a25eceacd4b76fda6" Apr 23 17:54:59.534540 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:54:59.534522 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-phhz6_openshift-console-operator(5a622cde-4463-4b2b-a60a-0724fdeeb5e3)\"" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" podUID="5a622cde-4463-4b2b-a60a-0724fdeeb5e3" Apr 23 17:54:59.719647 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.719616 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-865cb79987-594l8" event={"ID":"6069d0a0-db4c-460f-aff4-5f02bfbcfd37","Type":"ContainerStarted","Data":"4c4a5c7dc962c7b404c83e33774fe02013e88663529be300a2e062839c29c44c"} Apr 23 17:54:59.720052 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.719657 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-865cb79987-594l8" event={"ID":"6069d0a0-db4c-460f-aff4-5f02bfbcfd37","Type":"ContainerStarted","Data":"b395f48e66e5e9bb46f4d1258726eed221c2341f15f5817049cf5585a99aa7e6"} Apr 23 17:54:59.744223 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:54:59.744178 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-865cb79987-594l8" podStartSLOduration=1.744161648 podStartE2EDuration="1.744161648s" podCreationTimestamp="2026-04-23 17:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:54:59.742261303 +0000 UTC m=+170.106302244" watchObservedRunningTime="2026-04-23 17:54:59.744161648 +0000 UTC m=+170.108202590" Apr 23 17:55:00.135887 ip-10-0-135-87 kubenswrapper[2574]: I0423 
17:55:00.135833 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7c6cbb6c87-24f9z_926cf4a9-abea-43b7-baa6-dc9cd9430a00/volume-data-source-validator/0.log" Apr 23 17:55:01.535092 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:01.535063 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-zb5cp_4f4d7e96-7d49-43ba-bd2c-ee439980c9ed/csi-driver/0.log" Apr 23 17:55:01.734384 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:01.734349 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-zb5cp_4f4d7e96-7d49-43ba-bd2c-ee439980c9ed/csi-node-driver-registrar/0.log" Apr 23 17:55:01.934887 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:01.934840 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-zb5cp_4f4d7e96-7d49-43ba-bd2c-ee439980c9ed/csi-liveness-probe/0.log" Apr 23 17:55:04.269413 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:04.269385 2574 scope.go:117] "RemoveContainer" containerID="2155eb5bb8ef913dc126d852cf2f1710ed43f1b18d89c0100bd6368cefe84deb" Apr 23 17:55:04.739015 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:04.738989 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 17:55:04.739298 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:04.739277 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" event={"ID":"b86d5a8aaa7fecdf67a597e125a8b168","Type":"ContainerStarted","Data":"c739c22c7ad2e509614a1995c59eb5d83cdc796303791fab082a2f8f554dd568"} Apr 23 17:55:04.765432 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:04.765387 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal" podStartSLOduration=52.765373095 podStartE2EDuration="52.765373095s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:04.765126659 +0000 UTC m=+175.129167600" watchObservedRunningTime="2026-04-23 17:55:04.765373095 +0000 UTC m=+175.129414036" Apr 23 17:55:05.191291 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.191265 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:05.191441 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.191300 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:55:05.191441 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.191326 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:05.191441 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:05.191424 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:21.191410879 +0000 UTC m=+191.555451797 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : configmap references non-existent config key: service-ca.crt Apr 23 17:55:05.191586 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.191459 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:55:05.191586 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:05.191525 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:55:05.191679 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:05.191591 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs podName:ef6bbc19-ba30-4d63-ad0f-d37109da20b7 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:21.191574354 +0000 UTC m=+191.555615273 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs") pod "router-default-85cf97bcfb-crk2g" (UID: "ef6bbc19-ba30-4d63-ad0f-d37109da20b7") : secret "router-metrics-certs-default" not found Apr 23 17:55:05.193797 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.193777 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"image-registry-5bb94bc895-f4jk5\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:55:05.194551 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.194534 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/223b3de3-2746-4385-a15c-cba2eeb2e9ee-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-jhzwk\" (UID: \"223b3de3-2746-4385-a15c-cba2eeb2e9ee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:55:05.200929 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.200910 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:55:05.293205 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.293174 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:55:05.293571 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.293299 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:55:05.293571 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:05.293313 2574 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:05.293571 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:05.293392 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls podName:cabecf13-4b77-4125-bdb2-df08000b4d3d nodeName:}" failed. No retries permitted until 2026-04-23 17:55:21.293370191 +0000 UTC m=+191.657411116 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-8b4st" (UID: "cabecf13-4b77-4125-bdb2-df08000b4d3d") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:05.293571 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:05.293450 2574 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:55:05.293571 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:05.293514 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert podName:e4f9f970-44a9-4e79-ac39-0cfc094cc4ca nodeName:}" failed. No retries permitted until 2026-04-23 17:55:21.293497663 +0000 UTC m=+191.657538584 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-727jd" (UID: "e4f9f970-44a9-4e79-ac39-0cfc094cc4ca") : secret "networking-console-plugin-cert" not found Apr 23 17:55:05.329638 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.329606 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5bb94bc895-f4jk5"] Apr 23 17:55:05.332314 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:05.332284 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb39a95d3_b859_4e2d_bbef_fca1ee288a74.slice/crio-ef928054e581ef64558f1531bc955a50eaa7534ab592a5fd8627ba86e5de0bc4 WatchSource:0}: Error finding container ef928054e581ef64558f1531bc955a50eaa7534ab592a5fd8627ba86e5de0bc4: Status 404 returned error can't find the container with id ef928054e581ef64558f1531bc955a50eaa7534ab592a5fd8627ba86e5de0bc4 Apr 23 17:55:05.394767 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.394742 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:55:05.394874 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.394770 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:55:05.396990 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.396966 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/570f4ccf-8f66-420f-9543-207c02da2783-metrics-tls\") pod \"dns-default-hqlvp\" (UID: \"570f4ccf-8f66-420f-9543-207c02da2783\") " pod="openshift-dns/dns-default-hqlvp" Apr 23 17:55:05.397078 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.397062 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a455b3cc-b20e-46c2-9f70-3c5be09cad64-cert\") pod \"ingress-canary-6txtb\" (UID: \"a455b3cc-b20e-46c2-9f70-3c5be09cad64\") " pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:55:05.487256 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.487227 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" Apr 23 17:55:05.609944 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.609915 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk"] Apr 23 17:55:05.659113 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.659085 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hqlvp" Apr 23 17:55:05.686093 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.686068 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6txtb" Apr 23 17:55:05.743997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.743958 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" event={"ID":"b39a95d3-b859-4e2d-bbef-fca1ee288a74","Type":"ContainerStarted","Data":"0e2ca69060fc02aa0b53a111c5628dc7df1b0f4e7a4589fb563a25254a55f4fa"} Apr 23 17:55:05.744117 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.744007 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" event={"ID":"b39a95d3-b859-4e2d-bbef-fca1ee288a74","Type":"ContainerStarted","Data":"ef928054e581ef64558f1531bc955a50eaa7534ab592a5fd8627ba86e5de0bc4"} Apr 23 17:55:05.744787 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.744764 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:55:05.746021 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.745995 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" event={"ID":"223b3de3-2746-4385-a15c-cba2eeb2e9ee","Type":"ContainerStarted","Data":"fb98aebb5aad5c370b8c9082904f609f981d7d225d5bd6cc8de42daf77fe214c"} Apr 23 17:55:05.782376 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.782325 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" podStartSLOduration=60.781151634 podStartE2EDuration="1m0.781151634s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:05.780112103 +0000 UTC m=+176.144153044" watchObservedRunningTime="2026-04-23 17:55:05.781151634 +0000 UTC m=+176.145192576" Apr 23 17:55:05.831487 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.831456 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hqlvp"] Apr 23 17:55:05.834774 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:05.834739 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod570f4ccf_8f66_420f_9543_207c02da2783.slice/crio-6f41cad14528729f74ff59b4997a334cb9669c7dd6cd12961a1963c51ed393b3 WatchSource:0}: Error finding container 6f41cad14528729f74ff59b4997a334cb9669c7dd6cd12961a1963c51ed393b3: Status 404 returned error can't find the container with id 6f41cad14528729f74ff59b4997a334cb9669c7dd6cd12961a1963c51ed393b3 Apr 23 17:55:05.836975 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:05.836916 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6txtb"] Apr 23 17:55:05.855784 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:05.855761 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda455b3cc_b20e_46c2_9f70_3c5be09cad64.slice/crio-717692d83cfbb363a96c841e7ff6221008ad74dde18ac6c29e9618b91aff86f9 WatchSource:0}: Error finding container 717692d83cfbb363a96c841e7ff6221008ad74dde18ac6c29e9618b91aff86f9: Status 404 returned error can't find the container with id 717692d83cfbb363a96c841e7ff6221008ad74dde18ac6c29e9618b91aff86f9 Apr 23 17:55:06.751134 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:06.751096 2574 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hqlvp" event={"ID":"570f4ccf-8f66-420f-9543-207c02da2783","Type":"ContainerStarted","Data":"6f41cad14528729f74ff59b4997a334cb9669c7dd6cd12961a1963c51ed393b3"} Apr 23 17:55:06.752616 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:06.752589 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6txtb" event={"ID":"a455b3cc-b20e-46c2-9f70-3c5be09cad64","Type":"ContainerStarted","Data":"717692d83cfbb363a96c841e7ff6221008ad74dde18ac6c29e9618b91aff86f9"} Apr 23 17:55:09.761737 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.761701 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hqlvp" event={"ID":"570f4ccf-8f66-420f-9543-207c02da2783","Type":"ContainerStarted","Data":"478e7bd68bd7cb92e7f01e9a31312c43a77d78947a0925e82cece861507497ce"} Apr 23 17:55:09.761737 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.761746 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hqlvp" event={"ID":"570f4ccf-8f66-420f-9543-207c02da2783","Type":"ContainerStarted","Data":"f706a19433e4cc4f6eb682905dbf39ec308068ee5016ac20da65a35e0139db77"} Apr 23 17:55:09.762349 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.761765 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-hqlvp" Apr 23 17:55:09.763136 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.763114 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6txtb" event={"ID":"a455b3cc-b20e-46c2-9f70-3c5be09cad64","Type":"ContainerStarted","Data":"7c219b60b047022b386ba645c966791f7683462bd38a760d798a60aab529b424"} Apr 23 17:55:09.764667 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.764646 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" event={"ID":"223b3de3-2746-4385-a15c-cba2eeb2e9ee","Type":"ContainerStarted","Data":"4814119545a9e96f16b7201d6729668e8c77017e0855f4a5da4a2c3624503aeb"} Apr 23 17:55:09.764758 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.764672 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" event={"ID":"223b3de3-2746-4385-a15c-cba2eeb2e9ee","Type":"ContainerStarted","Data":"a6da701a3fa0a6dde5af1a2ef4e984ffda1e411d20b8a9aed73c214b191fb5de"} Apr 23 17:55:09.781752 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.781708 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hqlvp" podStartSLOduration=17.60103901 podStartE2EDuration="20.781693343s" podCreationTimestamp="2026-04-23 17:54:49 +0000 UTC" firstStartedPulling="2026-04-23 17:55:05.839381076 +0000 UTC m=+176.203421994" lastFinishedPulling="2026-04-23 17:55:09.020035391 +0000 UTC m=+179.384076327" observedRunningTime="2026-04-23 17:55:09.77977125 +0000 UTC m=+180.143812191" watchObservedRunningTime="2026-04-23 17:55:09.781693343 +0000 UTC m=+180.145734288" Apr 23 17:55:09.797262 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.797227 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6txtb" podStartSLOduration=17.630840787 podStartE2EDuration="20.797216136s" podCreationTimestamp="2026-04-23 17:54:49 +0000 UTC" firstStartedPulling="2026-04-23 17:55:05.858015681 +0000 UTC m=+176.222056599" 
lastFinishedPulling="2026-04-23 17:55:09.024391029 +0000 UTC m=+179.388431948" observedRunningTime="2026-04-23 17:55:09.795993965 +0000 UTC m=+180.160034906" watchObservedRunningTime="2026-04-23 17:55:09.797216136 +0000 UTC m=+180.161257054" Apr 23 17:55:09.814274 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:09.814236 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-jhzwk" podStartSLOduration=61.466495364 podStartE2EDuration="1m4.81422475s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="2026-04-23 17:55:05.672500915 +0000 UTC m=+176.036541834" lastFinishedPulling="2026-04-23 17:55:09.0202303 +0000 UTC m=+179.384271220" observedRunningTime="2026-04-23 17:55:09.813941548 +0000 UTC m=+180.177982490" watchObservedRunningTime="2026-04-23 17:55:09.81422475 +0000 UTC m=+180.178265691" Apr 23 17:55:10.647263 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:10.647237 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhdnl" Apr 23 17:55:13.268704 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:13.268672 2574 scope.go:117] "RemoveContainer" containerID="19dd6133c16fdf0693575bdf162010fd2ed719beb2781f2a25eceacd4b76fda6" Apr 23 17:55:13.776627 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:13.776600 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 17:55:13.776778 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:13.776665 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" event={"ID":"5a622cde-4463-4b2b-a60a-0724fdeeb5e3","Type":"ContainerStarted","Data":"3d86cc6e4626be9b9ca14f09130afeae22e676d8ec7df116a47871a0d7dc3a88"} Apr 23 17:55:13.776972 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:13.776943 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:55:13.797365 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:13.797321 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" podStartSLOduration=63.835317127 podStartE2EDuration="1m8.797308284s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="2026-04-23 17:54:49.733622888 +0000 UTC m=+160.097663821" lastFinishedPulling="2026-04-23 17:54:54.695614045 +0000 UTC m=+165.059654978" observedRunningTime="2026-04-23 17:55:13.796329112 +0000 UTC m=+184.160370053" watchObservedRunningTime="2026-04-23 17:55:13.797308284 +0000 UTC m=+184.161349224" Apr 23 17:55:14.519760 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:14.519732 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-9d4b6777b-phhz6" Apr 23 17:55:17.097020 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.096984 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:55:17.100916 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.100898 2574 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 23 17:55:17.109398 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.109379 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c90ba9-23a0-4be9-a89b-8ff980f1bb05-metrics-certs\") pod \"network-metrics-daemon-v8bcb\" (UID: \"43c90ba9-23a0-4be9-a89b-8ff980f1bb05\") " pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:55:17.180984 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.180958 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-5487j\"" Apr 23 17:55:17.188493 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.188473 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8bcb" Apr 23 17:55:17.198376 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.198354 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:55:17.201382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.201359 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr8gz\" (UniqueName: \"kubernetes.io/projected/194b68f6-135d-472e-a449-ddda482b9755-kube-api-access-tr8gz\") pod \"network-check-target-vfxjl\" (UID: \"194b68f6-135d-472e-a449-ddda482b9755\") " pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:55:17.310408 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.310350 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v8bcb"] Apr 23 17:55:17.312669 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:17.312643 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c90ba9_23a0_4be9_a89b_8ff980f1bb05.slice/crio-d96f0fc32c21d883def98bb584accc41bfd448e9b3039856f2e56fd12fbe0bbd WatchSource:0}: Error finding container d96f0fc32c21d883def98bb584accc41bfd448e9b3039856f2e56fd12fbe0bbd: Status 404 returned error can't find the container with id d96f0fc32c21d883def98bb584accc41bfd448e9b3039856f2e56fd12fbe0bbd Apr 23 17:55:17.484741 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.484714 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-gszvz\"" Apr 23 17:55:17.493321 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.493302 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:55:17.609073 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.609045 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-vfxjl"] Apr 23 17:55:17.611764 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:17.611741 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod194b68f6_135d_472e_a449_ddda482b9755.slice/crio-a6f1de0fd1193518ab5ec4945eaf005429c545b44255028fec6f5a05d2d62111 WatchSource:0}: Error finding container a6f1de0fd1193518ab5ec4945eaf005429c545b44255028fec6f5a05d2d62111: Status 404 returned error can't find the container with id a6f1de0fd1193518ab5ec4945eaf005429c545b44255028fec6f5a05d2d62111 Apr 23 17:55:17.790957 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.790872 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v8bcb" event={"ID":"43c90ba9-23a0-4be9-a89b-8ff980f1bb05","Type":"ContainerStarted","Data":"d96f0fc32c21d883def98bb584accc41bfd448e9b3039856f2e56fd12fbe0bbd"} Apr 23 17:55:17.792441 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.792415 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vfxjl" event={"ID":"194b68f6-135d-472e-a449-ddda482b9755","Type":"ContainerStarted","Data":"6d191c1138eceaef4b89e71ce96e588da3bccfd439db86c76b6ac7b244e04974"} Apr 23 17:55:17.792441 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.792449 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vfxjl" event={"ID":"194b68f6-135d-472e-a449-ddda482b9755","Type":"ContainerStarted","Data":"a6f1de0fd1193518ab5ec4945eaf005429c545b44255028fec6f5a05d2d62111"} Apr 23 17:55:17.792634 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.792528 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:55:17.820970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:17.820926 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-vfxjl" podStartSLOduration=69.820913121 podStartE2EDuration="1m9.820913121s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:17.819374877 +0000 UTC m=+188.183415816" watchObservedRunningTime="2026-04-23 17:55:17.820913121 +0000 UTC m=+188.184954053" Apr 23 17:55:18.796597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:18.796566 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v8bcb" event={"ID":"43c90ba9-23a0-4be9-a89b-8ff980f1bb05","Type":"ContainerStarted","Data":"91ef8831076e8aeaae1f9bb410636c2ccc9adb360f87613c41bf726f71cde0cf"} Apr 23 17:55:18.796597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:18.796600 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v8bcb" event={"ID":"43c90ba9-23a0-4be9-a89b-8ff980f1bb05","Type":"ContainerStarted","Data":"fc0299bc012ca59cfe2ae8eac7b3a75c03f33f5c7b461d6f07040eba050af644"} Apr 23 17:55:18.821612 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:18.821571 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/network-metrics-daemon-v8bcb" podStartSLOduration=69.812153706 podStartE2EDuration="1m10.821559016s" podCreationTimestamp="2026-04-23 17:54:08 +0000 UTC" firstStartedPulling="2026-04-23 17:55:17.315019455 +0000 UTC m=+187.679060374" lastFinishedPulling="2026-04-23 17:55:18.324424766 +0000 UTC m=+188.688465684" observedRunningTime="2026-04-23 17:55:18.820270399 +0000 UTC m=+189.184311344" watchObservedRunningTime="2026-04-23 17:55:18.821559016 +0000 UTC m=+189.185600025" Apr 23 17:55:19.769209 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:19.769184 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hqlvp" Apr 23 17:55:21.232202 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.232169 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:21.232651 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.232219 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:21.232750 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.232732 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-service-ca-bundle\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:21.234646 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.234621 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef6bbc19-ba30-4d63-ad0f-d37109da20b7-metrics-certs\") pod \"router-default-85cf97bcfb-crk2g\" (UID: \"ef6bbc19-ba30-4d63-ad0f-d37109da20b7\") " pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:21.333042 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.333013 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:55:21.333153 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.333061 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:55:21.335502 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.335480 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/e4f9f970-44a9-4e79-ac39-0cfc094cc4ca-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-727jd\" (UID: \"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:55:21.335559 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.335493 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabecf13-4b77-4125-bdb2-df08000b4d3d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-8b4st\" (UID: \"cabecf13-4b77-4125-bdb2-df08000b4d3d\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:55:21.361982 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.361958 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-pgfz6\"" Apr 23 17:55:21.369390 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.369374 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:21.461301 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.461270 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-x5d7b\"" Apr 23 17:55:21.469211 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.469185 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" Apr 23 17:55:21.495987 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.495933 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-85cf97bcfb-crk2g"] Apr 23 17:55:21.499118 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:21.499093 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef6bbc19_ba30_4d63_ad0f_d37109da20b7.slice/crio-a1668cdd2b9915a5f161ec31fb1bdfa12a2216566f1a89298234d37be8b03b58 WatchSource:0}: Error finding container a1668cdd2b9915a5f161ec31fb1bdfa12a2216566f1a89298234d37be8b03b58: Status 404 returned error can't find the container with id a1668cdd2b9915a5f161ec31fb1bdfa12a2216566f1a89298234d37be8b03b58 Apr 23 17:55:21.533524 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.533503 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-k7c2b\"" Apr 23 17:55:21.540430 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.540408 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" Apr 23 17:55:21.611444 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.611410 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st"] Apr 23 17:55:21.615507 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:21.615479 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcabecf13_4b77_4125_bdb2_df08000b4d3d.slice/crio-7f9741f6e18468829790e672995492d17a46c3c335061561a6733712b63c073e WatchSource:0}: Error finding container 7f9741f6e18468829790e672995492d17a46c3c335061561a6733712b63c073e: Status 404 returned error can't find the container with id 7f9741f6e18468829790e672995492d17a46c3c335061561a6733712b63c073e Apr 23 17:55:21.673521 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.673493 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-727jd"] Apr 23 17:55:21.674663 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:21.674633 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4f9f970_44a9_4e79_ac39_0cfc094cc4ca.slice/crio-ce604a4b721b62405d54f9f4d221ab5933717a630ba111634dea8789113b4e24 WatchSource:0}: Error finding container ce604a4b721b62405d54f9f4d221ab5933717a630ba111634dea8789113b4e24: Status 404 returned error can't find the container with id ce604a4b721b62405d54f9f4d221ab5933717a630ba111634dea8789113b4e24 Apr 23 17:55:21.806699 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.806605 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" event={"ID":"ef6bbc19-ba30-4d63-ad0f-d37109da20b7","Type":"ContainerStarted","Data":"b9434ce98de09b8790a2994b2502f4722e33cbf842e8b62aa59b4c15385acddb"} Apr 23 17:55:21.806699 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.806649 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" event={"ID":"ef6bbc19-ba30-4d63-ad0f-d37109da20b7","Type":"ContainerStarted","Data":"a1668cdd2b9915a5f161ec31fb1bdfa12a2216566f1a89298234d37be8b03b58"} Apr 23 17:55:21.807648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.807612 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" event={"ID":"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca","Type":"ContainerStarted","Data":"ce604a4b721b62405d54f9f4d221ab5933717a630ba111634dea8789113b4e24"} Apr 23 17:55:21.808580 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.808548 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" event={"ID":"cabecf13-4b77-4125-bdb2-df08000b4d3d","Type":"ContainerStarted","Data":"7f9741f6e18468829790e672995492d17a46c3c335061561a6733712b63c073e"} Apr 23 17:55:21.830695 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:21.830634 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" podStartSLOduration=76.830619426 podStartE2EDuration="1m16.830619426s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:21.830100071 +0000 UTC 
m=+192.194141012" watchObservedRunningTime="2026-04-23 17:55:21.830619426 +0000 UTC m=+192.194660369" Apr 23 17:55:22.283898 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.283252 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-6bcc868b7-84pz9"] Apr 23 17:55:22.287730 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.286784 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6bcc868b7-84pz9" Apr 23 17:55:22.291796 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.291771 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 23 17:55:22.292039 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.292022 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 23 17:55:22.293183 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.293162 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-msjkc\"" Apr 23 17:55:22.304212 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.304185 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-5bb94bc895-f4jk5"] Apr 23 17:55:22.310531 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.310504 2574 patch_prober.go:28] interesting pod/image-registry-5bb94bc895-f4jk5 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 23 17:55:22.310633 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.310570 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" podUID="b39a95d3-b859-4e2d-bbef-fca1ee288a74" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 17:55:22.326188 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.323971 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-6bcc868b7-84pz9"] Apr 23 17:55:22.356346 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.355866 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-55b97c5948-m4xr8"] Apr 23 17:55:22.359779 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.359718 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.370018 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.369960 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:22.372896 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.372877 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:22.397462 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.397432 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-55b97c5948-m4xr8"] Apr 23 17:55:22.428747 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.428718 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-mvdnw"] Apr 23 17:55:22.434108 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.434037 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.437304 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.437087 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 23 17:55:22.437304 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.437251 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-b7fp4\"" Apr 23 17:55:22.437756 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.437633 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 23 17:55:22.439357 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.439293 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccn6z\" (UniqueName: \"kubernetes.io/projected/03e6e9ae-fd11-43a3-8abe-baa38a028607-kube-api-access-ccn6z\") pod \"downloads-6bcc868b7-84pz9\" (UID: \"03e6e9ae-fd11-43a3-8abe-baa38a028607\") " pod="openshift-console/downloads-6bcc868b7-84pz9" Apr 23 17:55:22.458335 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.458313 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-mvdnw"] Apr 23 17:55:22.540759 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.540676 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-trusted-ca\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.540759 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.540717 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.541013 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.540770 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" 
(UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-image-registry-private-configuration\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.541013 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.540839 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-bound-sa-token\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.541013 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.540898 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.541013 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.540924 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-crio-socket\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.541013 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.540961 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt75l\" (UniqueName: \"kubernetes.io/projected/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-kube-api-access-jt75l\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.541240 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.541033 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-certificates\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.541240 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.541065 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-tls\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.541240 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.541122 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ccn6z\" (UniqueName: \"kubernetes.io/projected/03e6e9ae-fd11-43a3-8abe-baa38a028607-kube-api-access-ccn6z\") pod \"downloads-6bcc868b7-84pz9\" (UID: \"03e6e9ae-fd11-43a3-8abe-baa38a028607\") " pod="openshift-console/downloads-6bcc868b7-84pz9" Apr 23 17:55:22.541240 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.541157 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-ca-trust-extracted\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.541240 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.541177 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-installation-pull-secrets\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.541240 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.541199 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4nnn\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-kube-api-access-n4nnn\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.541492 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.541277 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-data-volume\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.554714 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.554682 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccn6z\" (UniqueName: \"kubernetes.io/projected/03e6e9ae-fd11-43a3-8abe-baa38a028607-kube-api-access-ccn6z\") pod \"downloads-6bcc868b7-84pz9\" (UID: \"03e6e9ae-fd11-43a3-8abe-baa38a028607\") " pod="openshift-console/downloads-6bcc868b7-84pz9" Apr 23 17:55:22.603717 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.603520 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-6bcc868b7-84pz9" Apr 23 17:55:22.642017 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.641982 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-image-registry-private-configuration\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.642189 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642030 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-bound-sa-token\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.642189 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642055 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.642189 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642078 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-crio-socket\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.642189 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642102 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jt75l\" (UniqueName: \"kubernetes.io/projected/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-kube-api-access-jt75l\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.642189 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642132 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-certificates\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.642189 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642161 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-tls\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.642561 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642254 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-crio-socket\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 
17:55:22.642561 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642307 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-ca-trust-extracted\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.642561 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642337 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-installation-pull-secrets\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.642561 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642364 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4nnn\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-kube-api-access-n4nnn\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.642561 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642408 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-data-volume\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.642561 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642440 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-trusted-ca\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.642561 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642470 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.643117 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.642789 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.643175 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.643116 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-ca-trust-extracted\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.643232 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.643183 2574 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-data-volume\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.644134 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.643867 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-certificates\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.644621 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.644572 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-trusted-ca\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.645221 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.645199 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-tls\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.646094 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.646071 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.646201 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.646078 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-image-registry-private-configuration\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.646721 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.646700 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-installation-pull-secrets\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.658630 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.658586 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt75l\" (UniqueName: \"kubernetes.io/projected/fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1-kube-api-access-jt75l\") pod \"insights-runtime-extractor-mvdnw\" (UID: \"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1\") " pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.670155 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.670132 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-bound-sa-token\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.670256 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.670206 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4nnn\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-kube-api-access-n4nnn\") pod \"image-registry-55b97c5948-m4xr8\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.674678 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.674655 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:22.751019 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.750906 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-mvdnw" Apr 23 17:55:22.777222 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.777196 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-6bcc868b7-84pz9"] Apr 23 17:55:22.813200 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.813124 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" event={"ID":"e4f9f970-44a9-4e79-ac39-0cfc094cc4ca","Type":"ContainerStarted","Data":"a937ffb9fcdd09aa6ed51de1c49f650a272e40b4454a27ad2223a9bd177dcabf"} Apr 23 17:55:22.813396 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.813377 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:22.814681 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.814659 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-85cf97bcfb-crk2g" Apr 23 17:55:22.852453 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.852401 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-cb95c66f6-727jd" podStartSLOduration=74.867728073 podStartE2EDuration="1m15.852383239s" podCreationTimestamp="2026-04-23 17:54:07 +0000 UTC" firstStartedPulling="2026-04-23 17:55:21.676502189 +0000 UTC m=+192.040543111" lastFinishedPulling="2026-04-23 17:55:22.661157352 +0000 UTC m=+193.025198277" observedRunningTime="2026-04-23 17:55:22.851318148 +0000 UTC m=+193.215359090" watchObservedRunningTime="2026-04-23 17:55:22.852383239 +0000 UTC m=+193.216424182" Apr 23 17:55:22.900448 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:22.900419 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-55b97c5948-m4xr8"] Apr 23 17:55:23.211037 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:23.210995 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03e6e9ae_fd11_43a3_8abe_baa38a028607.slice/crio-307690e1a69def37bf0767e6606189a297a8881b4a57551b461dddc370d1f74a WatchSource:0}: Error finding container 307690e1a69def37bf0767e6606189a297a8881b4a57551b461dddc370d1f74a: Status 404 returned error can't find the container with id 307690e1a69def37bf0767e6606189a297a8881b4a57551b461dddc370d1f74a Apr 23 17:55:23.211926 ip-10-0-135-87 
kubenswrapper[2574]: W0423 17:55:23.211804 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42ffbf1c_448d_41bd_8eae_566d6d4cb2d9.slice/crio-9bce06c82b0ea4acad6f13a69c1b687171e1d50df3f5ba5c574b4066d0727fcd WatchSource:0}: Error finding container 9bce06c82b0ea4acad6f13a69c1b687171e1d50df3f5ba5c574b4066d0727fcd: Status 404 returned error can't find the container with id 9bce06c82b0ea4acad6f13a69c1b687171e1d50df3f5ba5c574b4066d0727fcd Apr 23 17:55:23.245040 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.244726 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-65545bb479-82xqt"] Apr 23 17:55:23.248383 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.248361 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.254670 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.254456 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 23 17:55:23.255644 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.255191 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 23 17:55:23.255644 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.255497 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-52nww\"" Apr 23 17:55:23.256291 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.256266 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 23 17:55:23.256614 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.256594 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 23 17:55:23.258137 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.257953 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 23 17:55:23.267016 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.266981 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65545bb479-82xqt"] Apr 23 17:55:23.348988 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.348958 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-oauth-config\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.349298 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.349008 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4v5\" (UniqueName: \"kubernetes.io/projected/e684296b-68a2-4225-9296-807a9ed43d67-kube-api-access-nm4v5\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.349298 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.349130 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-serving-cert\") pod \"console-65545bb479-82xqt\" (UID: 
\"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.349298 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.349155 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-console-config\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.349298 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.349196 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-service-ca\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.349298 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.349243 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-oauth-serving-cert\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.371958 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.371896 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-mvdnw"] Apr 23 17:55:23.375056 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:23.375013 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb6a5ffd_e8aa_4ab3_a7ab_8658bebc06b1.slice/crio-2b0b9fd9118328d2e12c312a7477d459055375b8e998731c09077c4f5350b476 WatchSource:0}: Error finding container 2b0b9fd9118328d2e12c312a7477d459055375b8e998731c09077c4f5350b476: Status 404 returned error can't find the container with id 2b0b9fd9118328d2e12c312a7477d459055375b8e998731c09077c4f5350b476 Apr 23 17:55:23.450003 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.449971 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-oauth-config\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.450120 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.450023 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nm4v5\" (UniqueName: \"kubernetes.io/projected/e684296b-68a2-4225-9296-807a9ed43d67-kube-api-access-nm4v5\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.450120 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.450108 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-serving-cert\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.450255 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.450135 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-console-config\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.450255 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.450172 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-service-ca\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.450255 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.450195 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-oauth-serving-cert\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.452081 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.450926 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-oauth-serving-cert\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.452081 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.451583 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-console-config\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.452081 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.452034 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-service-ca\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.453285 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.453181 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-oauth-config\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.456318 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.456277 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-serving-cert\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.467481 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.467457 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm4v5\" (UniqueName: \"kubernetes.io/projected/e684296b-68a2-4225-9296-807a9ed43d67-kube-api-access-nm4v5\") pod \"console-65545bb479-82xqt\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.580699 ip-10-0-135-87 kubenswrapper[2574]: I0423 
17:55:23.580665 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:23.734198 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.734161 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65545bb479-82xqt"] Apr 23 17:55:23.738355 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:23.738323 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode684296b_68a2_4225_9296_807a9ed43d67.slice/crio-830e875ae54e39243640d1e33d9df4d4d82a065505d7991b889c8e0d0da4055f WatchSource:0}: Error finding container 830e875ae54e39243640d1e33d9df4d4d82a065505d7991b889c8e0d0da4055f: Status 404 returned error can't find the container with id 830e875ae54e39243640d1e33d9df4d4d82a065505d7991b889c8e0d0da4055f Apr 23 17:55:23.818782 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.818745 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mvdnw" event={"ID":"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1","Type":"ContainerStarted","Data":"a185f65f0e782a28652bd04036c65144342073a8042ee45cac94651782889538"} Apr 23 17:55:23.818971 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.818793 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mvdnw" event={"ID":"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1","Type":"ContainerStarted","Data":"2b0b9fd9118328d2e12c312a7477d459055375b8e998731c09077c4f5350b476"} Apr 23 17:55:23.820724 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.820684 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65545bb479-82xqt" event={"ID":"e684296b-68a2-4225-9296-807a9ed43d67","Type":"ContainerStarted","Data":"830e875ae54e39243640d1e33d9df4d4d82a065505d7991b889c8e0d0da4055f"} Apr 23 17:55:23.822622 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.822557 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" event={"ID":"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9","Type":"ContainerStarted","Data":"795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09"} Apr 23 17:55:23.822622 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.822594 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" event={"ID":"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9","Type":"ContainerStarted","Data":"9bce06c82b0ea4acad6f13a69c1b687171e1d50df3f5ba5c574b4066d0727fcd"} Apr 23 17:55:23.822813 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.822712 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:23.824119 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.824048 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6bcc868b7-84pz9" event={"ID":"03e6e9ae-fd11-43a3-8abe-baa38a028607","Type":"ContainerStarted","Data":"307690e1a69def37bf0767e6606189a297a8881b4a57551b461dddc370d1f74a"} Apr 23 17:55:23.830231 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.829873 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" event={"ID":"cabecf13-4b77-4125-bdb2-df08000b4d3d","Type":"ContainerStarted","Data":"cafe55b0ff29f3f0ebd827782df7c653dd597585ce2709c1d0c5f91ad44075ef"} Apr 23 
17:55:23.860812 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.860366 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" podStartSLOduration=77.860351039 podStartE2EDuration="1m17.860351039s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:23.858081966 +0000 UTC m=+194.222122907" watchObservedRunningTime="2026-04-23 17:55:23.860351039 +0000 UTC m=+194.224391978" Apr 23 17:55:23.884226 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.883941 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-8b4st" podStartSLOduration=76.222711717 podStartE2EDuration="1m17.883924982s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:55:21.61767392 +0000 UTC m=+191.981714840" lastFinishedPulling="2026-04-23 17:55:23.278887171 +0000 UTC m=+193.642928105" observedRunningTime="2026-04-23 17:55:23.882917346 +0000 UTC m=+194.246958290" watchObservedRunningTime="2026-04-23 17:55:23.883924982 +0000 UTC m=+194.247965924" Apr 23 17:55:23.901625 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.900872 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm"] Apr 23 17:55:23.904657 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.904639 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" Apr 23 17:55:23.907728 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.907538 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-dockercfg-28fd2\"" Apr 23 17:55:23.908101 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.907933 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-tls\"" Apr 23 17:55:23.915117 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:23.915094 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm"] Apr 23 17:55:24.057679 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:24.057643 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9811dcf-bfe0-485e-afae-82c020a66185-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-rg2dm\" (UID: \"b9811dcf-bfe0-485e-afae-82c020a66185\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" Apr 23 17:55:24.158758 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:24.158713 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9811dcf-bfe0-485e-afae-82c020a66185-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-rg2dm\" (UID: \"b9811dcf-bfe0-485e-afae-82c020a66185\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" Apr 23 17:55:24.159066 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:24.158887 2574 secret.go:189] Couldn't get secret 
openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Apr 23 17:55:24.159066 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:24.158960 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9811dcf-bfe0-485e-afae-82c020a66185-tls-certificates podName:b9811dcf-bfe0-485e-afae-82c020a66185 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:24.658938935 +0000 UTC m=+195.022979861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b9811dcf-bfe0-485e-afae-82c020a66185-tls-certificates") pod "prometheus-operator-admission-webhook-57cf98b594-rg2dm" (UID: "b9811dcf-bfe0-485e-afae-82c020a66185") : secret "prometheus-operator-admission-webhook-tls" not found Apr 23 17:55:24.663087 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:24.663047 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9811dcf-bfe0-485e-afae-82c020a66185-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-rg2dm\" (UID: \"b9811dcf-bfe0-485e-afae-82c020a66185\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" Apr 23 17:55:24.671403 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:24.671367 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9811dcf-bfe0-485e-afae-82c020a66185-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-rg2dm\" (UID: \"b9811dcf-bfe0-485e-afae-82c020a66185\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" Apr 23 17:55:24.817619 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:24.817149 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" Apr 23 17:55:24.843892 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:24.842982 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mvdnw" event={"ID":"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1","Type":"ContainerStarted","Data":"432d91996725e0178fa8f00319ca73528af25d30ff919c2cee1003b8f6129746"} Apr 23 17:55:24.979517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:24.979452 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm"] Apr 23 17:55:24.983374 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:24.983340 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9811dcf_bfe0_485e_afae_82c020a66185.slice/crio-4ceea52352466521d99732639543ef2cdf21b0e3d37d2def9f94ea877a30f99d WatchSource:0}: Error finding container 4ceea52352466521d99732639543ef2cdf21b0e3d37d2def9f94ea877a30f99d: Status 404 returned error can't find the container with id 4ceea52352466521d99732639543ef2cdf21b0e3d37d2def9f94ea877a30f99d Apr 23 17:55:25.847370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:25.847333 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" event={"ID":"b9811dcf-bfe0-485e-afae-82c020a66185","Type":"ContainerStarted","Data":"4ceea52352466521d99732639543ef2cdf21b0e3d37d2def9f94ea877a30f99d"} Apr 23 17:55:27.856544 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:27.856507 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mvdnw" event={"ID":"fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1","Type":"ContainerStarted","Data":"75465ec652913d3288b60b31053d3b9a9e37602f2fb96d96f5d803d9307856b0"} Apr 23 17:55:27.858211 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:27.858180 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65545bb479-82xqt" event={"ID":"e684296b-68a2-4225-9296-807a9ed43d67","Type":"ContainerStarted","Data":"5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74"} Apr 23 17:55:27.859732 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:27.859701 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" event={"ID":"b9811dcf-bfe0-485e-afae-82c020a66185","Type":"ContainerStarted","Data":"03c41937bf02b19ae44e1a9d41db7dfd1563e8d78508daed94ddf915502681d8"} Apr 23 17:55:27.859907 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:27.859888 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" Apr 23 17:55:27.865316 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:27.865294 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" Apr 23 17:55:27.880529 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:27.880491 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-mvdnw" podStartSLOduration=1.988559325 podStartE2EDuration="5.880479654s" podCreationTimestamp="2026-04-23 17:55:22 +0000 UTC" firstStartedPulling="2026-04-23 17:55:23.441562128 +0000 UTC m=+193.805603053" 
lastFinishedPulling="2026-04-23 17:55:27.333482448 +0000 UTC m=+197.697523382" observedRunningTime="2026-04-23 17:55:27.879224667 +0000 UTC m=+198.243265608" watchObservedRunningTime="2026-04-23 17:55:27.880479654 +0000 UTC m=+198.244520625" Apr 23 17:55:27.905235 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:27.905195 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65545bb479-82xqt" podStartSLOduration=1.311894699 podStartE2EDuration="4.905181629s" podCreationTimestamp="2026-04-23 17:55:23 +0000 UTC" firstStartedPulling="2026-04-23 17:55:23.740358053 +0000 UTC m=+194.104398972" lastFinishedPulling="2026-04-23 17:55:27.333644979 +0000 UTC m=+197.697685902" observedRunningTime="2026-04-23 17:55:27.905086017 +0000 UTC m=+198.269126959" watchObservedRunningTime="2026-04-23 17:55:27.905181629 +0000 UTC m=+198.269222570" Apr 23 17:55:27.928791 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:27.928748 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-rg2dm" podStartSLOduration=2.5302704560000002 podStartE2EDuration="4.928737502s" podCreationTimestamp="2026-04-23 17:55:23 +0000 UTC" firstStartedPulling="2026-04-23 17:55:24.986285567 +0000 UTC m=+195.350326491" lastFinishedPulling="2026-04-23 17:55:27.384752599 +0000 UTC m=+197.748793537" observedRunningTime="2026-04-23 17:55:27.927689403 +0000 UTC m=+198.291730345" watchObservedRunningTime="2026-04-23 17:55:27.928737502 +0000 UTC m=+198.292778444" Apr 23 17:55:28.992117 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:28.991659 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-k2mx9"] Apr 23 17:55:28.997059 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:28.997031 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.000399 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.000345 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\"" Apr 23 17:55:29.001833 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.001625 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\"" Apr 23 17:55:29.001833 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.001682 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-lwq79\"" Apr 23 17:55:29.002032 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.001931 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 23 17:55:29.004780 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.004760 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-k2mx9"] Apr 23 17:55:29.100640 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.100600 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.100803 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.100669 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d0291a82-c194-49a3-a786-a6fb55329b77-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.100803 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.100701 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.100803 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.100750 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwd7m\" (UniqueName: \"kubernetes.io/projected/d0291a82-c194-49a3-a786-a6fb55329b77-kube-api-access-qwd7m\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.202102 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.202066 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qwd7m\" (UniqueName: \"kubernetes.io/projected/d0291a82-c194-49a3-a786-a6fb55329b77-kube-api-access-qwd7m\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.202281 
ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.202148 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.202281 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.202240 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d0291a82-c194-49a3-a786-a6fb55329b77-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.202387 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.202287 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.202387 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:29.202343 2574 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Apr 23 17:55:29.202486 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:29.202404 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-tls podName:d0291a82-c194-49a3-a786-a6fb55329b77 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:29.702383781 +0000 UTC m=+200.066424719 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-tls") pod "prometheus-operator-5676c8c784-k2mx9" (UID: "d0291a82-c194-49a3-a786-a6fb55329b77") : secret "prometheus-operator-tls" not found Apr 23 17:55:29.203201 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.203155 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d0291a82-c194-49a3-a786-a6fb55329b77-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.205073 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.205048 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.215381 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.215336 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwd7m\" (UniqueName: \"kubernetes.io/projected/d0291a82-c194-49a3-a786-a6fb55329b77-kube-api-access-qwd7m\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.706492 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.706462 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.709333 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.709307 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0291a82-c194-49a3-a786-a6fb55329b77-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-k2mx9\" (UID: \"d0291a82-c194-49a3-a786-a6fb55329b77\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:29.908936 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:29.908894 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" Apr 23 17:55:30.051176 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:30.051126 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-k2mx9"] Apr 23 17:55:30.053518 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:30.053483 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0291a82_c194_49a3_a786_a6fb55329b77.slice/crio-fefe8cbe630f97ff12908124f509b423be815685c44faaaedafdae29333938a6 WatchSource:0}: Error finding container fefe8cbe630f97ff12908124f509b423be815685c44faaaedafdae29333938a6: Status 404 returned error can't find the container with id fefe8cbe630f97ff12908124f509b423be815685c44faaaedafdae29333938a6 Apr 23 17:55:30.871866 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:30.871789 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" event={"ID":"d0291a82-c194-49a3-a786-a6fb55329b77","Type":"ContainerStarted","Data":"fefe8cbe630f97ff12908124f509b423be815685c44faaaedafdae29333938a6"} Apr 23 17:55:31.073383 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.073347 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8"] Apr 23 17:55:31.077434 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.077404 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" Apr 23 17:55:31.081276 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.080674 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 23 17:55:31.081276 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.080750 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 23 17:55:31.081978 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.081955 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 23 17:55:31.082207 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.082184 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 23 17:55:31.087002 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.086983 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8"] Apr 23 17:55:31.117288 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.117255 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkt2\" (UniqueName: \"kubernetes.io/projected/982187e0-5a6e-4cf0-90e6-4cc698247373-kube-api-access-rdkt2\") pod \"managed-serviceaccount-addon-agent-856d8774bc-fktd8\" (UID: \"982187e0-5a6e-4cf0-90e6-4cc698247373\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" Apr 23 17:55:31.117408 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.117324 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/982187e0-5a6e-4cf0-90e6-4cc698247373-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-856d8774bc-fktd8\" (UID: \"982187e0-5a6e-4cf0-90e6-4cc698247373\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" Apr 23 17:55:31.149499 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.149423 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6"] Apr 23 17:55:31.153274 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.153254 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.156445 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.156416 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 23 17:55:31.156558 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.156416 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 23 17:55:31.160436 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.160415 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\"" Apr 23 17:55:31.160555 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.160507 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 23 17:55:31.161723 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.161706 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj"] Apr 23 17:55:31.165151 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.165134 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.168032 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.167830 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 23 17:55:31.168506 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.168469 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6"] Apr 23 17:55:31.177287 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.177264 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj"] Apr 23 17:55:31.217999 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.217969 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/63b921da-43d6-4b73-a9a9-8a3221949b04-klusterlet-config\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.218151 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.218047 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkt2\" (UniqueName: \"kubernetes.io/projected/982187e0-5a6e-4cf0-90e6-4cc698247373-kube-api-access-rdkt2\") pod \"managed-serviceaccount-addon-agent-856d8774bc-fktd8\" (UID: \"982187e0-5a6e-4cf0-90e6-4cc698247373\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" Apr 23 17:55:31.218151 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.218088 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63b921da-43d6-4b73-a9a9-8a3221949b04-tmp\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.218151 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.218122 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsprv\" (UniqueName: \"kubernetes.io/projected/63b921da-43d6-4b73-a9a9-8a3221949b04-kube-api-access-xsprv\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.218839 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.218228 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/982187e0-5a6e-4cf0-90e6-4cc698247373-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-856d8774bc-fktd8\" (UID: \"982187e0-5a6e-4cf0-90e6-4cc698247373\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" Apr 23 17:55:31.221969 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.221944 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/982187e0-5a6e-4cf0-90e6-4cc698247373-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-856d8774bc-fktd8\" (UID: \"982187e0-5a6e-4cf0-90e6-4cc698247373\") 
" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" Apr 23 17:55:31.228804 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.228780 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdkt2\" (UniqueName: \"kubernetes.io/projected/982187e0-5a6e-4cf0-90e6-4cc698247373-kube-api-access-rdkt2\") pod \"managed-serviceaccount-addon-agent-856d8774bc-fktd8\" (UID: \"982187e0-5a6e-4cf0-90e6-4cc698247373\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" Apr 23 17:55:31.319235 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319202 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/02321c17-811b-4320-bbb7-c629cf39eab1-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.319388 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319253 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5kw4\" (UniqueName: \"kubernetes.io/projected/02321c17-811b-4320-bbb7-c629cf39eab1-kube-api-access-r5kw4\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.319388 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319313 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63b921da-43d6-4b73-a9a9-8a3221949b04-tmp\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.319388 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319334 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xsprv\" (UniqueName: \"kubernetes.io/projected/63b921da-43d6-4b73-a9a9-8a3221949b04-kube-api-access-xsprv\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.319388 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319375 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.319580 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319419 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-hub\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.319580 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319476 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-ca\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.319580 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319558 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/63b921da-43d6-4b73-a9a9-8a3221949b04-klusterlet-config\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.319726 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319623 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.319726 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.319702 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63b921da-43d6-4b73-a9a9-8a3221949b04-tmp\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.322363 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.322341 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/63b921da-43d6-4b73-a9a9-8a3221949b04-klusterlet-config\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.330349 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.330328 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsprv\" (UniqueName: \"kubernetes.io/projected/63b921da-43d6-4b73-a9a9-8a3221949b04-kube-api-access-xsprv\") pod \"klusterlet-addon-workmgr-86d966f4cc-cp7bj\" (UID: \"63b921da-43d6-4b73-a9a9-8a3221949b04\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.401250 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.401177 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" Apr 23 17:55:31.420072 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.420041 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.420201 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.420095 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/02321c17-811b-4320-bbb7-c629cf39eab1-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.420201 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.420124 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r5kw4\" (UniqueName: \"kubernetes.io/projected/02321c17-811b-4320-bbb7-c629cf39eab1-kube-api-access-r5kw4\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.420201 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.420178 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.420365 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.420205 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-hub\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.420365 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.420255 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-ca\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.420886 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.420824 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/02321c17-811b-4320-bbb7-c629cf39eab1-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.423289 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.423253 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-ca\") 
pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.423456 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.423421 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-hub\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.423881 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.423813 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.424474 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.424271 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/02321c17-811b-4320-bbb7-c629cf39eab1-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.429063 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.429037 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-59d4798cc5-q8q9r"] Apr 23 17:55:31.433923 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.433904 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.435156 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.435114 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5kw4\" (UniqueName: \"kubernetes.io/projected/02321c17-811b-4320-bbb7-c629cf39eab1-kube-api-access-r5kw4\") pod \"cluster-proxy-proxy-agent-55f9b9dd49-flpw6\" (UID: \"02321c17-811b-4320-bbb7-c629cf39eab1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.448337 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.448311 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59d4798cc5-q8q9r"] Apr 23 17:55:31.456873 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.456771 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 23 17:55:31.464338 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.464313 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" Apr 23 17:55:31.489941 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.489505 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:31.521870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.521467 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-serving-cert\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.521870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.521531 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-oauth-config\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.521870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.521561 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7h5\" (UniqueName: \"kubernetes.io/projected/29470926-5713-4d9b-8a39-cf795d0a4226-kube-api-access-zk7h5\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.521870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.521596 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-oauth-serving-cert\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.521870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.521645 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-console-config\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.521870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.521672 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-service-ca\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.521870 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.521702 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-trusted-ca-bundle\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.569307 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.569277 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8"] Apr 23 17:55:31.578252 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:31.578218 2574 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod982187e0_5a6e_4cf0_90e6_4cc698247373.slice/crio-83aa31a5323e1dcd15a973722df00208a1c266f983091b2e171951ff963eddfc WatchSource:0}: Error finding container 83aa31a5323e1dcd15a973722df00208a1c266f983091b2e171951ff963eddfc: Status 404 returned error can't find the container with id 83aa31a5323e1dcd15a973722df00208a1c266f983091b2e171951ff963eddfc Apr 23 17:55:31.623970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.623427 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-serving-cert\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.623970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.623506 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-oauth-config\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.623970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.623532 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zk7h5\" (UniqueName: \"kubernetes.io/projected/29470926-5713-4d9b-8a39-cf795d0a4226-kube-api-access-zk7h5\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.623970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.623566 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-oauth-serving-cert\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.623970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.623617 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-console-config\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.623970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.623651 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-service-ca\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.623970 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.623700 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-trusted-ca-bundle\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.625396 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.624564 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-console-config\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.626893 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.625587 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-trusted-ca-bundle\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.626893 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.626091 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-oauth-serving-cert\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.626893 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.626826 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-service-ca\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.634636 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.634380 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-serving-cert\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.635160 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.634940 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-oauth-config\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.644243 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.641945 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk7h5\" (UniqueName: \"kubernetes.io/projected/29470926-5713-4d9b-8a39-cf795d0a4226-kube-api-access-zk7h5\") pod \"console-59d4798cc5-q8q9r\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.653926 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.653808 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6"] Apr 23 17:55:31.662193 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:31.662161 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02321c17_811b_4320_bbb7_c629cf39eab1.slice/crio-daabcee41281fb8053a08ae54d60f58a52349dea1e9ce4e75adfe8c8132f4aa5 WatchSource:0}: Error finding container daabcee41281fb8053a08ae54d60f58a52349dea1e9ce4e75adfe8c8132f4aa5: Status 404 returned error can't find the container with id daabcee41281fb8053a08ae54d60f58a52349dea1e9ce4e75adfe8c8132f4aa5 Apr 23 17:55:31.680200 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.679989 2574 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj"] Apr 23 17:55:31.681948 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:31.681917 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63b921da_43d6_4b73_a9a9_8a3221949b04.slice/crio-ddcfb1e27ee09752cac7622618a5d2a9c9fc4972a4d5f658f6b9a1bfc559d13f WatchSource:0}: Error finding container ddcfb1e27ee09752cac7622618a5d2a9c9fc4972a4d5f658f6b9a1bfc559d13f: Status 404 returned error can't find the container with id ddcfb1e27ee09752cac7622618a5d2a9c9fc4972a4d5f658f6b9a1bfc559d13f Apr 23 17:55:31.747148 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.747111 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:31.875911 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.875879 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" event={"ID":"02321c17-811b-4320-bbb7-c629cf39eab1","Type":"ContainerStarted","Data":"daabcee41281fb8053a08ae54d60f58a52349dea1e9ce4e75adfe8c8132f4aa5"} Apr 23 17:55:31.877193 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.877165 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" event={"ID":"63b921da-43d6-4b73-a9a9-8a3221949b04","Type":"ContainerStarted","Data":"ddcfb1e27ee09752cac7622618a5d2a9c9fc4972a4d5f658f6b9a1bfc559d13f"} Apr 23 17:55:31.878404 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.878376 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" event={"ID":"982187e0-5a6e-4cf0-90e6-4cc698247373","Type":"ContainerStarted","Data":"83aa31a5323e1dcd15a973722df00208a1c266f983091b2e171951ff963eddfc"} Apr 23 17:55:31.880201 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.880175 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" event={"ID":"d0291a82-c194-49a3-a786-a6fb55329b77","Type":"ContainerStarted","Data":"292abdb23b67f7e47ebbaa0ef6acf0118985d728369758b77530c85c9be6f2f0"} Apr 23 17:55:31.880303 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.880207 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" event={"ID":"d0291a82-c194-49a3-a786-a6fb55329b77","Type":"ContainerStarted","Data":"609d3d2701fac10a8673a0dcd1bc3b84843cbe450013e6b7122af830d9562900"} Apr 23 17:55:31.900091 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.900067 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59d4798cc5-q8q9r"] Apr 23 17:55:31.901269 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:31.901243 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29470926_5713_4d9b_8a39_cf795d0a4226.slice/crio-a434f51ee5599248f3d19d927ec6d0d0bdca59a414118b280809735ca49e557a WatchSource:0}: Error finding container a434f51ee5599248f3d19d927ec6d0d0bdca59a414118b280809735ca49e557a: Status 404 returned error can't find the container with id a434f51ee5599248f3d19d927ec6d0d0bdca59a414118b280809735ca49e557a Apr 23 17:55:31.912017 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:31.911907 2574 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5676c8c784-k2mx9" podStartSLOduration=2.593180447 podStartE2EDuration="3.911889528s" podCreationTimestamp="2026-04-23 17:55:28 +0000 UTC" firstStartedPulling="2026-04-23 17:55:30.056163584 +0000 UTC m=+200.420204506" lastFinishedPulling="2026-04-23 17:55:31.374872654 +0000 UTC m=+201.738913587" observedRunningTime="2026-04-23 17:55:31.909983889 +0000 UTC m=+202.274024834" watchObservedRunningTime="2026-04-23 17:55:31.911889528 +0000 UTC m=+202.275930468"
Apr 23 17:55:32.310685 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:32.310652 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5"
Apr 23 17:55:32.888964 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:32.888926 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59d4798cc5-q8q9r" event={"ID":"29470926-5713-4d9b-8a39-cf795d0a4226","Type":"ContainerStarted","Data":"c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0"}
Apr 23 17:55:32.889139 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:32.888972 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59d4798cc5-q8q9r" event={"ID":"29470926-5713-4d9b-8a39-cf795d0a4226","Type":"ContainerStarted","Data":"a434f51ee5599248f3d19d927ec6d0d0bdca59a414118b280809735ca49e557a"}
Apr 23 17:55:32.912228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:32.911976 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-59d4798cc5-q8q9r" podStartSLOduration=1.911955382 podStartE2EDuration="1.911955382s" podCreationTimestamp="2026-04-23 17:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:32.911956134 +0000 UTC m=+203.275997118" watchObservedRunningTime="2026-04-23 17:55:32.911955382 +0000 UTC m=+203.275996323"
Apr 23 17:55:33.446900 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.445080 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"]
Apr 23 17:55:33.451559 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.450960 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp"]
Apr 23 17:55:33.471079 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.453178 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"
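The two "Observed pod startup duration" entries above come from the kubelet's startup-latency SLO tracker: podStartE2EDuration is creation-to-running including image pulls (3.911889528s for prometheus-operator, of which roughly 1.32s was spent pulling between firstStartedPulling and lastFinishedPulling, leaving the 2.593180447s podStartSLOduration), while pods whose images were already present, like the console pod, report zero-value pull timestamps and identical SLO and E2E durations. A small aggregation sketch follows; it assumes Python 3 and journal text with one entry per line (for example from journalctl -u kubelet), and the field names are matched exactly as they appear in these lines, not via any stable kubelet API.

```python
#!/usr/bin/env python3
# Illustrative sketch (assumed tooling): aggregate the kubelet's
# "Observed pod startup duration" entries from journal text, e.g.
#   journalctl -u kubelet --no-pager | python3 startup_latency.py
# Field names follow the log format above; this is not a stable kubelet API.
import re
import statistics
import sys

ENTRY = re.compile(
    r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
    r' podStartSLOduration=(?P<slo>[0-9.]+)'
    r' podStartE2EDuration="(?P<e2e>[0-9.]+)s"'
)

def main() -> None:
    e2e = {}  # pod -> end-to-end startup seconds (last observation wins)
    for line in sys.stdin:
        m = ENTRY.search(line)
        if m:
            e2e[m["pod"]] = float(m["e2e"])
    for pod, secs in sorted(e2e.items(), key=lambda kv: -kv[1]):
        print(f"{secs:8.3f}s  {pod}")
    if e2e:
        print(f"median {statistics.median(e2e.values()):.3f}s over {len(e2e)} pods")

if __name__ == "__main__":
    main()
```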
Apr 23 17:55:33.471079 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.457097 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-tls\""
Apr 23 17:55:33.471079 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.458572 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-kube-rbac-proxy-config\""
Apr 23 17:55:33.471079 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.459960 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-custom-resource-state-configmap\""
Apr 23 17:55:33.478248 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.478224 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"]
Apr 23 17:55:33.478563 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.478546 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp"
Apr 23 17:55:33.485770 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.485745 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp"]
Apr 23 17:55:33.485909 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.485899 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-j7hjg"]
Apr 23 17:55:33.488122 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.488083 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-dockercfg-c2nlv\""
Apr 23 17:55:33.488785 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.488767 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-kube-rbac-proxy-config\""
Apr 23 17:55:33.489590 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.489567 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.490900 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.490630 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-dockercfg-w5wvb\"" Apr 23 17:55:33.490900 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.490807 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-tls\"" Apr 23 17:55:33.494027 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.492788 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-rzdrn\"" Apr 23 17:55:33.494027 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.492981 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 23 17:55:33.494027 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.493148 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 23 17:55:33.499397 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.499383 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 23 17:55:33.545829 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.545763 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-textfile\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.546075 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.545876 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.546075 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.545914 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-wtmp\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.546075 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.545978 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-tls\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.546075 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.545994 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-metrics-client-ca\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " 
pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.546075 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546041 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.546075 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546071 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546096 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-accelerators-collector-config\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546140 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-sys\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546172 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc94s\" (UniqueName: \"kubernetes.io/projected/75a82e88-93ab-4540-b1fd-381e8e042f06-kube-api-access-nc94s\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546202 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgqrn\" (UniqueName: \"kubernetes.io/projected/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-kube-api-access-rgqrn\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546237 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546272 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546297 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75a82e88-93ab-4540-b1fd-381e8e042f06-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546329 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4lcv\" (UniqueName: \"kubernetes.io/projected/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-api-access-n4lcv\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.546382 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546362 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-root\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.547146 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546432 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.547146 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546462 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.547146 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.546522 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.581496 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.581472 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:33.581983 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.581965 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:33.589545 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.589364 2574 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.651904 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-sys\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.651947 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nc94s\" (UniqueName: \"kubernetes.io/projected/75a82e88-93ab-4540-b1fd-381e8e042f06-kube-api-access-nc94s\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.651977 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rgqrn\" (UniqueName: \"kubernetes.io/projected/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-kube-api-access-rgqrn\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652009 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652041 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652065 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75a82e88-93ab-4540-b1fd-381e8e042f06-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652095 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4lcv\" (UniqueName: \"kubernetes.io/projected/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-api-access-n4lcv\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652149 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-root\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 
17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652211 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652238 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652320 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652350 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-textfile\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652380 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652407 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-wtmp\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652432 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-tls\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.652517 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652456 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-metrics-client-ca\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.653437 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652492 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.653437 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652523 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.653437 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.652550 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-accelerators-collector-config\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.653437 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.653212 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-accelerators-collector-config\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.653437 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.653275 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-sys\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.654533 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:33.654508 2574 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Apr 23 17:55:33.654632 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:33.654573 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-tls podName:75a82e88-93ab-4540-b1fd-381e8e042f06 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:34.154555389 +0000 UTC m=+204.518596315 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-tls") pod "openshift-state-metrics-9d44df66c-6q7jp" (UID: "75a82e88-93ab-4540-b1fd-381e8e042f06") : secret "openshift-state-metrics-tls" not found
Apr 23 17:55:33.656077 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.656051 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"
Apr 23 17:55:33.656614 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.656572 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75a82e88-93ab-4540-b1fd-381e8e042f06-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp"
Apr 23 17:55:33.659270 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.657155 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-root\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg"
Apr 23 17:55:33.659270 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.657370 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-textfile\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg"
Apr 23 17:55:33.659270 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.657499 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-wtmp\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg"
Apr 23 17:55:33.659270 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.657614 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-metrics-client-ca\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg"
Apr 23 17:55:33.659270 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:33.657709 2574 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Apr 23 17:55:33.659270 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:33.657761 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-tls podName:1a5bc9a8-8c44-4a50-91b5-1f0f006e4229 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:34.15774203 +0000 UTC m=+204.521782954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-tls") pod "node-exporter-j7hjg" (UID: "1a5bc9a8-8c44-4a50-91b5-1f0f006e4229") : secret "node-exporter-tls" not found
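The E-level entries just above show the kubelet's per-volume retry behaviour: the referenced Secrets (openshift-state-metrics-tls, node-exporter-tls) do not exist yet, so MountVolume.SetUp fails and nestedpendingoperations blocks further attempts for the logged durationBeforeRetry (500ms here); the retries at 17:55:34.16 a few lines below succeed once the Secrets have been created. The sketch below is one way to pull these failures and their retry deadlines out of such a log; it assumes Python 3, one journal entry per line as journalctl prints it, and a regular expression tuned to the exact message text shown here rather than any stable kubelet interface.

```python
#!/usr/bin/env python3
# Illustrative sketch (assumed tooling, not part of the cluster): summarize kubelet
# MountVolume.SetUp failures and their retry deadlines from journal text, e.g.
#   journalctl -u kubelet --no-pager | python3 mount_failures.py
# Assumes one journal entry per line; the regex mirrors the message format above.
import re
import sys

FAILURE = re.compile(
    r'No retries permitted until (?P<retry_at>\S+ \S+ \+0000 UTC)'
    r'.*\(durationBeforeRetry (?P<backoff>[^)]+)\)\. '
    r'Error: MountVolume\.SetUp failed for volume "(?P<volume>[^"]+)"'
    r'.*pod "(?P<pod>[^"]+)".*\) : (?P<reason>.+)$'
)

def main() -> None:
    for line in sys.stdin:
        m = FAILURE.search(line)
        if m:
            # e.g. node-exporter-tls on pod node-exporter-j7hjg: secret "node-exporter-tls"
            #      not found (backoff 500ms, retry at 2026-04-23 17:55:34...)
            print(f'{m["volume"]} on pod {m["pod"]}: {m["reason"].strip()} '
                  f'(backoff {m["backoff"]}, retry at {m["retry_at"]})')

if __name__ == "__main__":
    main()
```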
Apr 23 17:55:33.659270 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.658097 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"
Apr 23 17:55:33.659270 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.658929 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"
Apr 23 17:55:33.668015 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.667975 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc94s\" (UniqueName: \"kubernetes.io/projected/75a82e88-93ab-4540-b1fd-381e8e042f06-kube-api-access-nc94s\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp"
Apr 23 17:55:33.676364 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.676323 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4lcv\" (UniqueName: \"kubernetes.io/projected/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-api-access-n4lcv\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"
Apr 23 17:55:33.685200 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.685174 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg"
Apr 23 17:55:33.686335 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.686307 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"
Apr 23 17:55:33.687042 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.686723 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/23abd42d-a8a4-44c2-9c7b-dd1ca477dc93-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-9wvv9\" (UID: \"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"
Apr 23 17:55:33.687042 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.687005 2574 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rgqrn\" (UniqueName: \"kubernetes.io/projected/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-kube-api-access-rgqrn\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:33.695692 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.695008 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:33.802098 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.801623 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" Apr 23 17:55:33.905643 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:33.905434 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:55:34.158967 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.158890 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:34.159132 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.159010 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-tls\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:34.161717 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.161661 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a82e88-93ab-4540-b1fd-381e8e042f06-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-6q7jp\" (UID: \"75a82e88-93ab-4540-b1fd-381e8e042f06\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:34.162027 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.162003 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1a5bc9a8-8c44-4a50-91b5-1f0f006e4229-node-exporter-tls\") pod \"node-exporter-j7hjg\" (UID: \"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229\") " pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:34.405378 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.405338 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" Apr 23 17:55:34.414226 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.414163 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-j7hjg" Apr 23 17:55:34.613447 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.613304 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:55:34.618074 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.618050 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.620967 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.620931 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 23 17:55:34.620967 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.620954 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 23 17:55:34.621146 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.620933 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 23 17:55:34.622997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.621467 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 23 17:55:34.622997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.621605 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 23 17:55:34.622997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.621619 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 23 17:55:34.622997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.621648 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 23 17:55:34.622997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.621676 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 23 17:55:34.622997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.621957 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-w8hwm\"" Apr 23 17:55:34.622997 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.622079 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 23 17:55:34.642028 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.641995 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664001 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664046 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/96f385f0-80a9-4479-991e-2067a92047fd-tls-assets\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664072 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664099 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96f385f0-80a9-4479-991e-2067a92047fd-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664127 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664161 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/96f385f0-80a9-4479-991e-2067a92047fd-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664191 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664233 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96f385f0-80a9-4479-991e-2067a92047fd-config-out\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664261 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l7rd\" (UniqueName: \"kubernetes.io/projected/96f385f0-80a9-4479-991e-2067a92047fd-kube-api-access-5l7rd\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664287 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.664370 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664334 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96f385f0-80a9-4479-991e-2067a92047fd-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.665103 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664388 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-config-volume\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.665103 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.664423 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-web-config\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765245 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765283 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96f385f0-80a9-4479-991e-2067a92047fd-config-out\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765304 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5l7rd\" (UniqueName: \"kubernetes.io/projected/96f385f0-80a9-4479-991e-2067a92047fd-kube-api-access-5l7rd\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765325 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765350 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96f385f0-80a9-4479-991e-2067a92047fd-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765382 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-config-volume\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765406 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-web-config\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765452 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765469 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96f385f0-80a9-4479-991e-2067a92047fd-tls-assets\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765484 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765499 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96f385f0-80a9-4479-991e-2067a92047fd-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765516 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.765905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.765538 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/96f385f0-80a9-4479-991e-2067a92047fd-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.766814 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.766613 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/96f385f0-80a9-4479-991e-2067a92047fd-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.768468 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:34.767005 2574 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Apr 23 17:55:34.768468 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:55:34.767088 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-main-tls podName:96f385f0-80a9-4479-991e-2067a92047fd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:35.267067236 +0000 UTC m=+205.631108206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "96f385f0-80a9-4479-991e-2067a92047fd") : secret "alertmanager-main-tls" not found Apr 23 17:55:34.768468 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.767913 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96f385f0-80a9-4479-991e-2067a92047fd-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.768763 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.768720 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/96f385f0-80a9-4479-991e-2067a92047fd-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.769806 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.769763 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.770550 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.770509 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.771277 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.771223 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.774280 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.774132 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-config-volume\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.774518 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.774497 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96f385f0-80a9-4479-991e-2067a92047fd-tls-assets\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.774518 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.774507 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.774640 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.774552 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-web-config\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.775945 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.775917 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96f385f0-80a9-4479-991e-2067a92047fd-config-out\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:34.779611 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:34.779544 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l7rd\" (UniqueName: \"kubernetes.io/projected/96f385f0-80a9-4479-991e-2067a92047fd-kube-api-access-5l7rd\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:35.270280 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:35.270249 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:35.273238 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:35.273212 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/96f385f0-80a9-4479-991e-2067a92047fd-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"96f385f0-80a9-4479-991e-2067a92047fd\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:35.532266 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:35.532044 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:36.610923 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.610884 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6bff6c748f-b6mkb"] Apr 23 17:55:36.616652 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.616626 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.626667 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.626305 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-metrics\"" Apr 23 17:55:36.626667 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.626321 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-web\"" Apr 23 17:55:36.626667 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.626367 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-dockercfg-l5nrq\"" Apr 23 17:55:36.626667 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.626514 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-rules\"" Apr 23 17:55:36.626667 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.626543 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy\"" Apr 23 17:55:36.626667 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.626615 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-grpc-tls-afqo0ndlq5dc9\"" Apr 23 17:55:36.629155 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.629132 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-tls\"" Apr 23 17:55:36.639233 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.639210 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6bff6c748f-b6mkb"] Apr 23 17:55:36.686072 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.685911 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-tls\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.686072 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.686018 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-grpc-tls\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.686258 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.686080 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/662354fc-65c1-4dc1-a71f-b0640bab8b2f-metrics-client-ca\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.686258 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.686112 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy\") pod 
\"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.686258 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.686211 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njmmb\" (UniqueName: \"kubernetes.io/projected/662354fc-65c1-4dc1-a71f-b0640bab8b2f-kube-api-access-njmmb\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.686368 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.686253 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.686368 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.686290 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.686368 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.686328 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.786821 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.786789 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-grpc-tls\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.786982 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.786866 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/662354fc-65c1-4dc1-a71f-b0640bab8b2f-metrics-client-ca\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.786982 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.786900 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.786982 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.786962 2574 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-njmmb\" (UniqueName: \"kubernetes.io/projected/662354fc-65c1-4dc1-a71f-b0640bab8b2f-kube-api-access-njmmb\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.787132 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.786993 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.787132 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.787023 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.787132 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.787055 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.787223 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.787138 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-tls\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.787674 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.787644 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/662354fc-65c1-4dc1-a71f-b0640bab8b2f-metrics-client-ca\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.790730 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.790680 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.790941 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.790911 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-tls\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 
17:55:36.791226 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.791206 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.791921 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.791895 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.792775 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.792735 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-grpc-tls\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.793384 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.793360 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/662354fc-65c1-4dc1-a71f-b0640bab8b2f-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.797731 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.797696 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-njmmb\" (UniqueName: \"kubernetes.io/projected/662354fc-65c1-4dc1-a71f-b0640bab8b2f-kube-api-access-njmmb\") pod \"thanos-querier-6bff6c748f-b6mkb\" (UID: \"662354fc-65c1-4dc1-a71f-b0640bab8b2f\") " pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:36.930522 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:36.930494 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:41.748158 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:41.748107 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:41.748158 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:41.748162 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:41.754578 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:41.754516 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:41.940628 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:41.940598 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:55:42.019469 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:42.019013 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-65545bb479-82xqt"] Apr 23 17:55:43.046212 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:43.046174 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a5bc9a8_8c44_4a50_91b5_1f0f006e4229.slice/crio-b349df45411485a7d73d2593074c251639fdefbbb5f28c13cb6b7d4a9de3f819 WatchSource:0}: Error finding container b349df45411485a7d73d2593074c251639fdefbbb5f28c13cb6b7d4a9de3f819: Status 404 returned error can't find the container with id b349df45411485a7d73d2593074c251639fdefbbb5f28c13cb6b7d4a9de3f819 Apr 23 17:55:43.307916 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:43.307880 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96f385f0_80a9_4479_991e_2067a92047fd.slice/crio-dcb4dc41fdd00f03a23aab067582ac19a4c0bd6ddb5d8ed163a6294778f0b622 WatchSource:0}: Error finding container dcb4dc41fdd00f03a23aab067582ac19a4c0bd6ddb5d8ed163a6294778f0b622: Status 404 returned error can't find the container with id dcb4dc41fdd00f03a23aab067582ac19a4c0bd6ddb5d8ed163a6294778f0b622 Apr 23 17:55:43.308910 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.308581 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:55:43.326363 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.326284 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-9wvv9"] Apr 23 17:55:43.329944 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:43.329914 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23abd42d_a8a4_44c2_9c7b_dd1ca477dc93.slice/crio-14810276e3ead22b98120070afe747ced37dd01a8433088e8b321e46adcb4bc7 WatchSource:0}: Error finding container 14810276e3ead22b98120070afe747ced37dd01a8433088e8b321e46adcb4bc7: Status 404 returned error can't find the container with id 14810276e3ead22b98120070afe747ced37dd01a8433088e8b321e46adcb4bc7 Apr 23 17:55:43.569439 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.569413 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp"] Apr 23 17:55:43.572496 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.572464 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6bff6c748f-b6mkb"] Apr 23 
17:55:43.773995 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:43.773952 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod662354fc_65c1_4dc1_a71f_b0640bab8b2f.slice/crio-434dcd515f1c1f7c86e04b814f0be73e061434a0a80ebf8a5b58cfd584b46a6f WatchSource:0}: Error finding container 434dcd515f1c1f7c86e04b814f0be73e061434a0a80ebf8a5b58cfd584b46a6f: Status 404 returned error can't find the container with id 434dcd515f1c1f7c86e04b814f0be73e061434a0a80ebf8a5b58cfd584b46a6f Apr 23 17:55:43.775577 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:55:43.775547 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a82e88_93ab_4540_b1fd_381e8e042f06.slice/crio-06192f8193e1919d187a04d10b7b697fd2e485d3e9939e0d04d010450915279d WatchSource:0}: Error finding container 06192f8193e1919d187a04d10b7b697fd2e485d3e9939e0d04d010450915279d: Status 404 returned error can't find the container with id 06192f8193e1919d187a04d10b7b697fd2e485d3e9939e0d04d010450915279d Apr 23 17:55:43.947304 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.946703 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" event={"ID":"75a82e88-93ab-4540-b1fd-381e8e042f06","Type":"ContainerStarted","Data":"e218fd9cc543ce492e23bb843f6a94464639d0fa2cf884b97cee07c1b5be66e7"} Apr 23 17:55:43.947304 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.946746 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" event={"ID":"75a82e88-93ab-4540-b1fd-381e8e042f06","Type":"ContainerStarted","Data":"06192f8193e1919d187a04d10b7b697fd2e485d3e9939e0d04d010450915279d"} Apr 23 17:55:43.950526 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.949541 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" event={"ID":"982187e0-5a6e-4cf0-90e6-4cc698247373","Type":"ContainerStarted","Data":"120c79ed5b279157187b3929138535f7b8fdf0f3dce4e40ea6a8d6d05c3acf39"} Apr 23 17:55:43.952238 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.952178 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" event={"ID":"662354fc-65c1-4dc1-a71f-b0640bab8b2f","Type":"ContainerStarted","Data":"434dcd515f1c1f7c86e04b814f0be73e061434a0a80ebf8a5b58cfd584b46a6f"} Apr 23 17:55:43.954420 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.954357 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" event={"ID":"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93","Type":"ContainerStarted","Data":"14810276e3ead22b98120070afe747ced37dd01a8433088e8b321e46adcb4bc7"} Apr 23 17:55:43.956462 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.956405 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"96f385f0-80a9-4479-991e-2067a92047fd","Type":"ContainerStarted","Data":"dcb4dc41fdd00f03a23aab067582ac19a4c0bd6ddb5d8ed163a6294778f0b622"} Apr 23 17:55:43.958591 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.958561 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" 
event={"ID":"02321c17-811b-4320-bbb7-c629cf39eab1","Type":"ContainerStarted","Data":"990253b5dad40d63df7751fd258221be8aa3f2d6115c1af91aad7acc7229b1e4"} Apr 23 17:55:43.961648 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.961622 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-j7hjg" event={"ID":"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229","Type":"ContainerStarted","Data":"812a7ad10ca23cb2909d1b8a2709fc84b81f01398191d1588b0da761a1f508a7"} Apr 23 17:55:43.961749 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.961657 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-j7hjg" event={"ID":"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229","Type":"ContainerStarted","Data":"b349df45411485a7d73d2593074c251639fdefbbb5f28c13cb6b7d4a9de3f819"} Apr 23 17:55:43.965566 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.965202 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6bcc868b7-84pz9" event={"ID":"03e6e9ae-fd11-43a3-8abe-baa38a028607","Type":"ContainerStarted","Data":"8e5c26a35d49483adbc9662e4b3c70d12fdf7bfed673bb2da854a9dd850610e7"} Apr 23 17:55:43.965566 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.965345 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-6bcc868b7-84pz9" Apr 23 17:55:43.969627 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:43.969572 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-6bcc868b7-84pz9" Apr 23 17:55:44.015699 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:44.015472 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-856d8774bc-fktd8" podStartSLOduration=1.5562906079999999 podStartE2EDuration="13.015452142s" podCreationTimestamp="2026-04-23 17:55:31 +0000 UTC" firstStartedPulling="2026-04-23 17:55:31.580906601 +0000 UTC m=+201.944947526" lastFinishedPulling="2026-04-23 17:55:43.040068139 +0000 UTC m=+213.404109060" observedRunningTime="2026-04-23 17:55:43.986357781 +0000 UTC m=+214.350398723" watchObservedRunningTime="2026-04-23 17:55:44.015452142 +0000 UTC m=+214.379493084" Apr 23 17:55:44.851819 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:44.850778 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:55:44.882665 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:44.880612 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-6bcc868b7-84pz9" podStartSLOduration=2.997463418 podStartE2EDuration="22.880589825s" podCreationTimestamp="2026-04-23 17:55:22 +0000 UTC" firstStartedPulling="2026-04-23 17:55:23.214128239 +0000 UTC m=+193.578169160" lastFinishedPulling="2026-04-23 17:55:43.097254647 +0000 UTC m=+213.461295567" observedRunningTime="2026-04-23 17:55:44.03952352 +0000 UTC m=+214.403564463" watchObservedRunningTime="2026-04-23 17:55:44.880589825 +0000 UTC m=+215.244630766" Apr 23 17:55:44.973642 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:44.973562 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" event={"ID":"75a82e88-93ab-4540-b1fd-381e8e042f06","Type":"ContainerStarted","Data":"0d307feca727ea377c8b81ac26ee10852d0cf01e125a8e5e7875b7ff55514e4e"} Apr 23 17:55:44.978559 ip-10-0-135-87 kubenswrapper[2574]: I0423 
17:55:44.978033 2574 generic.go:358] "Generic (PLEG): container finished" podID="1a5bc9a8-8c44-4a50-91b5-1f0f006e4229" containerID="812a7ad10ca23cb2909d1b8a2709fc84b81f01398191d1588b0da761a1f508a7" exitCode=0 Apr 23 17:55:44.978559 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:44.978110 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-j7hjg" event={"ID":"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229","Type":"ContainerDied","Data":"812a7ad10ca23cb2909d1b8a2709fc84b81f01398191d1588b0da761a1f508a7"} Apr 23 17:55:47.330325 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:47.330254 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" podUID="b39a95d3-b859-4e2d-bbef-fca1ee288a74" containerName="registry" containerID="cri-o://0e2ca69060fc02aa0b53a111c5628dc7df1b0f4e7a4589fb563a25254a55f4fa" gracePeriod=30 Apr 23 17:55:47.993523 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:47.993395 2574 generic.go:358] "Generic (PLEG): container finished" podID="b39a95d3-b859-4e2d-bbef-fca1ee288a74" containerID="0e2ca69060fc02aa0b53a111c5628dc7df1b0f4e7a4589fb563a25254a55f4fa" exitCode=0 Apr 23 17:55:47.993523 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:47.993482 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" event={"ID":"b39a95d3-b859-4e2d-bbef-fca1ee288a74","Type":"ContainerDied","Data":"0e2ca69060fc02aa0b53a111c5628dc7df1b0f4e7a4589fb563a25254a55f4fa"} Apr 23 17:55:48.800388 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:48.800277 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-vfxjl" Apr 23 17:55:49.408117 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.408094 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:55:49.539125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.538726 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b39a95d3-b859-4e2d-bbef-fca1ee288a74-ca-trust-extracted\") pod \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " Apr 23 17:55:49.539125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.538771 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") pod \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " Apr 23 17:55:49.539125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.538872 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-certificates\") pod \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " Apr 23 17:55:49.539125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.538911 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-image-registry-private-configuration\") pod \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " Apr 23 17:55:49.539125 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.538960 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-installation-pull-secrets\") pod \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " Apr 23 17:55:49.539452 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.539250 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-trusted-ca\") pod \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " Apr 23 17:55:49.539452 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.539305 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-bound-sa-token\") pod \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " Apr 23 17:55:49.539452 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.539341 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t526k\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-kube-api-access-t526k\") pod \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\" (UID: \"b39a95d3-b859-4e2d-bbef-fca1ee288a74\") " Apr 23 17:55:49.540672 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.540619 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b39a95d3-b859-4e2d-bbef-fca1ee288a74" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:55:49.541210 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.541173 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b39a95d3-b859-4e2d-bbef-fca1ee288a74" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:55:49.543057 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.543005 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "b39a95d3-b859-4e2d-bbef-fca1ee288a74" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:55:49.543905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.543140 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b39a95d3-b859-4e2d-bbef-fca1ee288a74" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:55:49.543905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.543536 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b39a95d3-b859-4e2d-bbef-fca1ee288a74" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:55:49.543905 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.543681 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b39a95d3-b859-4e2d-bbef-fca1ee288a74" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:55:49.544900 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.544721 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-kube-api-access-t526k" (OuterVolumeSpecName: "kube-api-access-t526k") pod "b39a95d3-b859-4e2d-bbef-fca1ee288a74" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74"). InnerVolumeSpecName "kube-api-access-t526k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:55:49.551424 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.551200 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39a95d3-b859-4e2d-bbef-fca1ee288a74-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b39a95d3-b859-4e2d-bbef-fca1ee288a74" (UID: "b39a95d3-b859-4e2d-bbef-fca1ee288a74"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:55:49.640636 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.640599 2574 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-certificates\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:55:49.640636 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.640635 2574 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-image-registry-private-configuration\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:55:49.640883 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.640652 2574 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b39a95d3-b859-4e2d-bbef-fca1ee288a74-installation-pull-secrets\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:55:49.640883 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.640670 2574 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b39a95d3-b859-4e2d-bbef-fca1ee288a74-trusted-ca\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:55:49.640883 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.640684 2574 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-bound-sa-token\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:55:49.640883 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.640695 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t526k\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-kube-api-access-t526k\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:55:49.640883 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.640706 2574 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b39a95d3-b859-4e2d-bbef-fca1ee288a74-ca-trust-extracted\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:55:49.640883 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:49.640715 2574 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b39a95d3-b859-4e2d-bbef-fca1ee288a74-registry-tls\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:55:50.005835 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.005638 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-j7hjg" event={"ID":"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229","Type":"ContainerStarted","Data":"6785f4d7151b0ed2bb8f18de773a71eb50ac1959e57dbb218eaaa06a888144b8"} Apr 23 17:55:50.005835 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.005681 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-j7hjg" event={"ID":"1a5bc9a8-8c44-4a50-91b5-1f0f006e4229","Type":"ContainerStarted","Data":"2e5596afc28bd100d539b4d0970b0900a6550f6958ef8a22475486d55befcdb3"} Apr 23 17:55:50.009089 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.009037 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" 
event={"ID":"63b921da-43d6-4b73-a9a9-8a3221949b04","Type":"ContainerStarted","Data":"7e0c9a133b0f3e0eda111a3943793534a8a8a5d7a237108ffd1302e6a84845d3"} Apr 23 17:55:50.009664 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.009500 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:50.010781 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.010564 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" event={"ID":"b39a95d3-b859-4e2d-bbef-fca1ee288a74","Type":"ContainerDied","Data":"ef928054e581ef64558f1531bc955a50eaa7534ab592a5fd8627ba86e5de0bc4"} Apr 23 17:55:50.010781 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.010580 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5bb94bc895-f4jk5" Apr 23 17:55:50.010781 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.010604 2574 scope.go:117] "RemoveContainer" containerID="0e2ca69060fc02aa0b53a111c5628dc7df1b0f4e7a4589fb563a25254a55f4fa" Apr 23 17:55:50.012192 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.012154 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" Apr 23 17:55:50.014063 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.013379 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" event={"ID":"75a82e88-93ab-4540-b1fd-381e8e042f06","Type":"ContainerStarted","Data":"80f3fcf678d9d0ba8a239fa254d2c62bdc3b54f5a64bc700c2ed90c2b8286155"} Apr 23 17:55:50.016479 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.016414 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" event={"ID":"662354fc-65c1-4dc1-a71f-b0640bab8b2f","Type":"ContainerStarted","Data":"3fa891d0af1de3b13c7b9de45dec1158c8a6f1110bd7e098c2ea51d82bdc3caf"} Apr 23 17:55:50.016479 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.016440 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" event={"ID":"662354fc-65c1-4dc1-a71f-b0640bab8b2f","Type":"ContainerStarted","Data":"127ad2da42bcddab7417f87501d3082392374b63a1cb0c6f1dd0c3e8657fab76"} Apr 23 17:55:50.016479 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.016454 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" event={"ID":"662354fc-65c1-4dc1-a71f-b0640bab8b2f","Type":"ContainerStarted","Data":"abcbffa64f3b1fd6d5f784d72d3962f7fee74b6b86874462f707671112ae4929"} Apr 23 17:55:50.029471 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.028507 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" event={"ID":"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93","Type":"ContainerStarted","Data":"aba665ebb2a0c9415516fcad09b0faf43b50633aa8208c99e2a40091323e977b"} Apr 23 17:55:50.029471 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.028546 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" event={"ID":"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93","Type":"ContainerStarted","Data":"d3d9b581b8f232c9368f726f8834214a16eb7622f8956979b145ece7b0dea20d"} Apr 23 17:55:50.029471 ip-10-0-135-87 
kubenswrapper[2574]: I0423 17:55:50.028564 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" event={"ID":"23abd42d-a8a4-44c2-9c7b-dd1ca477dc93","Type":"ContainerStarted","Data":"42939aab234040a5ab610ff122e49974b430dded4765b68c890ca5f197141ab3"} Apr 23 17:55:50.031527 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.031488 2574 generic.go:358] "Generic (PLEG): container finished" podID="96f385f0-80a9-4479-991e-2067a92047fd" containerID="1e8471dd4fb49e1615142dffa296730c2f5b9e8fc0dac29998fec8399a50b737" exitCode=0 Apr 23 17:55:50.031643 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.031563 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"96f385f0-80a9-4479-991e-2067a92047fd","Type":"ContainerDied","Data":"1e8471dd4fb49e1615142dffa296730c2f5b9e8fc0dac29998fec8399a50b737"} Apr 23 17:55:50.034031 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.033974 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-j7hjg" podStartSLOduration=16.285185288 podStartE2EDuration="17.033959052s" podCreationTimestamp="2026-04-23 17:55:33 +0000 UTC" firstStartedPulling="2026-04-23 17:55:43.056288756 +0000 UTC m=+213.420329681" lastFinishedPulling="2026-04-23 17:55:43.805062526 +0000 UTC m=+214.169103445" observedRunningTime="2026-04-23 17:55:50.031941714 +0000 UTC m=+220.395982666" watchObservedRunningTime="2026-04-23 17:55:50.033959052 +0000 UTC m=+220.397999994" Apr 23 17:55:50.056980 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.056959 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-5bb94bc895-f4jk5"] Apr 23 17:55:50.066243 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.066221 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-5bb94bc895-f4jk5"] Apr 23 17:55:50.132119 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.132061 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-86d966f4cc-cp7bj" podStartSLOduration=1.071843708 podStartE2EDuration="19.132046943s" podCreationTimestamp="2026-04-23 17:55:31 +0000 UTC" firstStartedPulling="2026-04-23 17:55:31.684043575 +0000 UTC m=+202.048084495" lastFinishedPulling="2026-04-23 17:55:49.744246812 +0000 UTC m=+220.108287730" observedRunningTime="2026-04-23 17:55:50.098099846 +0000 UTC m=+220.462140797" watchObservedRunningTime="2026-04-23 17:55:50.132046943 +0000 UTC m=+220.496087884" Apr 23 17:55:50.151360 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.151314 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-69db897b98-9wvv9" podStartSLOduration=11.223183441 podStartE2EDuration="17.15129946s" podCreationTimestamp="2026-04-23 17:55:33 +0000 UTC" firstStartedPulling="2026-04-23 17:55:43.332981705 +0000 UTC m=+213.697022626" lastFinishedPulling="2026-04-23 17:55:49.261097719 +0000 UTC m=+219.625138645" observedRunningTime="2026-04-23 17:55:50.149506565 +0000 UTC m=+220.513547516" watchObservedRunningTime="2026-04-23 17:55:50.15129946 +0000 UTC m=+220.515340413" Apr 23 17:55:50.170492 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.170432 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-6q7jp" podStartSLOduration=11.90276616 
podStartE2EDuration="17.170416646s" podCreationTimestamp="2026-04-23 17:55:33 +0000 UTC" firstStartedPulling="2026-04-23 17:55:43.992824333 +0000 UTC m=+214.356865255" lastFinishedPulling="2026-04-23 17:55:49.260474822 +0000 UTC m=+219.624515741" observedRunningTime="2026-04-23 17:55:50.168483569 +0000 UTC m=+220.532524560" watchObservedRunningTime="2026-04-23 17:55:50.170416646 +0000 UTC m=+220.534457590" Apr 23 17:55:50.276040 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:50.275967 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39a95d3-b859-4e2d-bbef-fca1ee288a74" path="/var/lib/kubelet/pods/b39a95d3-b859-4e2d-bbef-fca1ee288a74/volumes" Apr 23 17:55:53.057867 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:53.056316 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" event={"ID":"662354fc-65c1-4dc1-a71f-b0640bab8b2f","Type":"ContainerStarted","Data":"cbde7aef42a3d16edc00b05b72625440e409ebfadc86eb3b0357a40cefc2489a"} Apr 23 17:55:53.057867 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:53.056361 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" event={"ID":"662354fc-65c1-4dc1-a71f-b0640bab8b2f","Type":"ContainerStarted","Data":"a77dd6fd26f249913dfd986a9bfa357056f60fd7f7bcee0fa7a196a512452005"} Apr 23 17:55:53.057867 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:53.056374 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" event={"ID":"662354fc-65c1-4dc1-a71f-b0640bab8b2f","Type":"ContainerStarted","Data":"c680df50eca3258c4dacb9b62185fe56cee4e5c1b23b63bf41267c46f4bf0363"} Apr 23 17:55:53.057867 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:53.057466 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:53.061662 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:53.061641 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"96f385f0-80a9-4479-991e-2067a92047fd","Type":"ContainerStarted","Data":"ecccc52cea0f6e23ab71d16a199d5072162e9e8baedb4115d827c09e892695a6"} Apr 23 17:55:53.061773 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:53.061667 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"96f385f0-80a9-4479-991e-2067a92047fd","Type":"ContainerStarted","Data":"4a43e170e2067e765c4f618ea857a185d7b675919043797699f5ca5c2e4d016d"} Apr 23 17:55:53.061773 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:53.061679 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"96f385f0-80a9-4479-991e-2067a92047fd","Type":"ContainerStarted","Data":"c81cb252eb2e7b9d23603bc2c8115bb76f656ab01d705f09ef39b53f85c0654d"} Apr 23 17:55:54.071309 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:54.071266 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"96f385f0-80a9-4479-991e-2067a92047fd","Type":"ContainerStarted","Data":"ff86e4d7e98919180f15680615dd3913a381b16d1a886f93bff090544ec1ed03"} Apr 23 17:55:54.071309 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:54.071315 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"96f385f0-80a9-4479-991e-2067a92047fd","Type":"ContainerStarted","Data":"96fd84bc803b0a8ed68773657d892c4d9e534b2c802a28fe705cf7132d8c2040"} Apr 23 17:55:54.071975 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:54.071328 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"96f385f0-80a9-4479-991e-2067a92047fd","Type":"ContainerStarted","Data":"fd4691ec9678e005a32834f0abaf25d0cc889224b58a2d7ce93dbb9ede677a73"} Apr 23 17:55:54.080675 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:54.080601 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" Apr 23 17:55:54.104391 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:54.104328 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=10.970199635 podStartE2EDuration="20.10430989s" podCreationTimestamp="2026-04-23 17:55:34 +0000 UTC" firstStartedPulling="2026-04-23 17:55:43.310586842 +0000 UTC m=+213.674627764" lastFinishedPulling="2026-04-23 17:55:52.444697086 +0000 UTC m=+222.808738019" observedRunningTime="2026-04-23 17:55:54.102409777 +0000 UTC m=+224.466450731" watchObservedRunningTime="2026-04-23 17:55:54.10430989 +0000 UTC m=+224.468350846" Apr 23 17:55:54.104559 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:54.104497 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6bff6c748f-b6mkb" podStartSLOduration=9.942003196 podStartE2EDuration="18.104490601s" podCreationTimestamp="2026-04-23 17:55:36 +0000 UTC" firstStartedPulling="2026-04-23 17:55:43.77631248 +0000 UTC m=+214.140353405" lastFinishedPulling="2026-04-23 17:55:51.938799876 +0000 UTC m=+222.302840810" observedRunningTime="2026-04-23 17:55:53.087575637 +0000 UTC m=+223.451616579" watchObservedRunningTime="2026-04-23 17:55:54.104490601 +0000 UTC m=+224.468531552" Apr 23 17:55:56.085399 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:56.085361 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" event={"ID":"02321c17-811b-4320-bbb7-c629cf39eab1","Type":"ContainerStarted","Data":"1351dc640375c6e50b6406f829d57d178ee2bc767d8490b0c851f009cfe4bd62"} Apr 23 17:55:56.085836 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:56.085407 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" event={"ID":"02321c17-811b-4320-bbb7-c629cf39eab1","Type":"ContainerStarted","Data":"e01dd7840b5ddc939a0d8e396a905135fa387b0c95bdb5dc5b9ff174f9686a2d"} Apr 23 17:55:56.114627 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:56.114581 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-55f9b9dd49-flpw6" podStartSLOduration=1.233769577 podStartE2EDuration="25.114567127s" podCreationTimestamp="2026-04-23 17:55:31 +0000 UTC" firstStartedPulling="2026-04-23 17:55:31.666714799 +0000 UTC m=+202.030755729" lastFinishedPulling="2026-04-23 17:55:55.547512343 +0000 UTC m=+225.911553279" observedRunningTime="2026-04-23 17:55:56.111898884 +0000 UTC m=+226.475939825" watchObservedRunningTime="2026-04-23 17:55:56.114567127 +0000 UTC m=+226.478608068" Apr 23 17:55:56.275263 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:55:56.275235 2574 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-console/console-59d4798cc5-q8q9r"] Apr 23 17:56:00.138522 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:00.138491 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-55b97c5948-m4xr8"] Apr 23 17:56:07.047416 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.047344 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-65545bb479-82xqt" podUID="e684296b-68a2-4225-9296-807a9ed43d67" containerName="console" containerID="cri-o://5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74" gracePeriod=15 Apr 23 17:56:07.326388 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.326367 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65545bb479-82xqt_e684296b-68a2-4225-9296-807a9ed43d67/console/0.log" Apr 23 17:56:07.326491 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.326430 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:56:07.405539 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.405508 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-service-ca\") pod \"e684296b-68a2-4225-9296-807a9ed43d67\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " Apr 23 17:56:07.405690 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.405584 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-oauth-config\") pod \"e684296b-68a2-4225-9296-807a9ed43d67\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " Apr 23 17:56:07.405690 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.405609 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-console-config\") pod \"e684296b-68a2-4225-9296-807a9ed43d67\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " Apr 23 17:56:07.405690 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.405660 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-serving-cert\") pod \"e684296b-68a2-4225-9296-807a9ed43d67\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " Apr 23 17:56:07.405821 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.405740 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm4v5\" (UniqueName: \"kubernetes.io/projected/e684296b-68a2-4225-9296-807a9ed43d67-kube-api-access-nm4v5\") pod \"e684296b-68a2-4225-9296-807a9ed43d67\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " Apr 23 17:56:07.405821 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.405764 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-oauth-serving-cert\") pod \"e684296b-68a2-4225-9296-807a9ed43d67\" (UID: \"e684296b-68a2-4225-9296-807a9ed43d67\") " Apr 23 17:56:07.406106 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.405977 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-service-ca" (OuterVolumeSpecName: "service-ca") pod "e684296b-68a2-4225-9296-807a9ed43d67" (UID: "e684296b-68a2-4225-9296-807a9ed43d67"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:07.406237 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.406109 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-console-config" (OuterVolumeSpecName: "console-config") pod "e684296b-68a2-4225-9296-807a9ed43d67" (UID: "e684296b-68a2-4225-9296-807a9ed43d67"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:07.406237 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.406164 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e684296b-68a2-4225-9296-807a9ed43d67" (UID: "e684296b-68a2-4225-9296-807a9ed43d67"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:07.408174 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.408144 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e684296b-68a2-4225-9296-807a9ed43d67-kube-api-access-nm4v5" (OuterVolumeSpecName: "kube-api-access-nm4v5") pod "e684296b-68a2-4225-9296-807a9ed43d67" (UID: "e684296b-68a2-4225-9296-807a9ed43d67"). InnerVolumeSpecName "kube-api-access-nm4v5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:07.408439 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.408415 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e684296b-68a2-4225-9296-807a9ed43d67" (UID: "e684296b-68a2-4225-9296-807a9ed43d67"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:07.408525 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.408434 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e684296b-68a2-4225-9296-807a9ed43d67" (UID: "e684296b-68a2-4225-9296-807a9ed43d67"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:07.507274 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.507236 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nm4v5\" (UniqueName: \"kubernetes.io/projected/e684296b-68a2-4225-9296-807a9ed43d67-kube-api-access-nm4v5\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:07.507274 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.507266 2574 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-oauth-serving-cert\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:07.507488 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.507281 2574 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-service-ca\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:07.507488 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.507295 2574 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-oauth-config\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:07.507488 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.507304 2574 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e684296b-68a2-4225-9296-807a9ed43d67-console-config\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:07.507488 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:07.507313 2574 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e684296b-68a2-4225-9296-807a9ed43d67-console-serving-cert\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:08.125353 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.125312 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65545bb479-82xqt_e684296b-68a2-4225-9296-807a9ed43d67/console/0.log" Apr 23 17:56:08.125353 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.125350 2574 generic.go:358] "Generic (PLEG): container finished" podID="e684296b-68a2-4225-9296-807a9ed43d67" containerID="5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74" exitCode=2 Apr 23 17:56:08.125790 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.125387 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65545bb479-82xqt" event={"ID":"e684296b-68a2-4225-9296-807a9ed43d67","Type":"ContainerDied","Data":"5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74"} Apr 23 17:56:08.125790 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.125410 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65545bb479-82xqt" event={"ID":"e684296b-68a2-4225-9296-807a9ed43d67","Type":"ContainerDied","Data":"830e875ae54e39243640d1e33d9df4d4d82a065505d7991b889c8e0d0da4055f"} Apr 23 17:56:08.125790 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.125410 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65545bb479-82xqt" Apr 23 17:56:08.125790 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.125422 2574 scope.go:117] "RemoveContainer" containerID="5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74" Apr 23 17:56:08.134385 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.134366 2574 scope.go:117] "RemoveContainer" containerID="5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74" Apr 23 17:56:08.134631 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:56:08.134603 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74\": container with ID starting with 5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74 not found: ID does not exist" containerID="5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74" Apr 23 17:56:08.134676 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.134640 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74"} err="failed to get container status \"5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74\": rpc error: code = NotFound desc = could not find container \"5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74\": container with ID starting with 5b204257743c0ee639dfd28757c3ab85cc5a521b288251128a64fbd4b9709f74 not found: ID does not exist" Apr 23 17:56:08.149258 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.149234 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-65545bb479-82xqt"] Apr 23 17:56:08.153415 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.153391 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-65545bb479-82xqt"] Apr 23 17:56:08.273364 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:08.273335 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e684296b-68a2-4225-9296-807a9ed43d67" path="/var/lib/kubelet/pods/e684296b-68a2-4225-9296-807a9ed43d67/volumes" Apr 23 17:56:16.153939 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:16.153853 2574 generic.go:358] "Generic (PLEG): container finished" podID="d01e1208-1867-464a-822f-89683cda0372" containerID="1c4cdf25e472c3716c4c7e074823bcde30360c9c5e175bc6db657d79e23218c8" exitCode=0 Apr 23 17:56:16.153939 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:16.153873 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-7h7vz" event={"ID":"d01e1208-1867-464a-822f-89683cda0372","Type":"ContainerDied","Data":"1c4cdf25e472c3716c4c7e074823bcde30360c9c5e175bc6db657d79e23218c8"} Apr 23 17:56:16.154424 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:16.154272 2574 scope.go:117] "RemoveContainer" containerID="1c4cdf25e472c3716c4c7e074823bcde30360c9c5e175bc6db657d79e23218c8" Apr 23 17:56:16.155429 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:16.155401 2574 generic.go:358] "Generic (PLEG): container finished" podID="e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a" containerID="9a800249c686791fae365233024167ddf308d243c928860446996be79883e5f4" exitCode=0 Apr 23 17:56:16.155534 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:16.155446 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" 
event={"ID":"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a","Type":"ContainerDied","Data":"9a800249c686791fae365233024167ddf308d243c928860446996be79883e5f4"} Apr 23 17:56:16.155704 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:16.155692 2574 scope.go:117] "RemoveContainer" containerID="9a800249c686791fae365233024167ddf308d243c928860446996be79883e5f4" Apr 23 17:56:17.161415 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:17.161369 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-7h7vz" event={"ID":"d01e1208-1867-464a-822f-89683cda0372","Type":"ContainerStarted","Data":"a1de3d8c91eb176eebda58c79ea54b5dedc75816743dea966345b107de1e51d9"} Apr 23 17:56:17.162981 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:17.162956 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-jfgn6" event={"ID":"e21ba2c0-7cc7-4b50-ba6c-1fb814e1f50a","Type":"ContainerStarted","Data":"2b10467710cb7d055f44432c50c1a4e98380d3c0aaff91747e8831c7c4f7245f"} Apr 23 17:56:21.294512 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.294479 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-59d4798cc5-q8q9r" podUID="29470926-5713-4d9b-8a39-cf795d0a4226" containerName="console" containerID="cri-o://c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0" gracePeriod=15 Apr 23 17:56:21.568089 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.568067 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59d4798cc5-q8q9r_29470926-5713-4d9b-8a39-cf795d0a4226/console/0.log" Apr 23 17:56:21.568196 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.568128 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:56:21.620234 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620203 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk7h5\" (UniqueName: \"kubernetes.io/projected/29470926-5713-4d9b-8a39-cf795d0a4226-kube-api-access-zk7h5\") pod \"29470926-5713-4d9b-8a39-cf795d0a4226\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " Apr 23 17:56:21.620234 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620239 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-console-config\") pod \"29470926-5713-4d9b-8a39-cf795d0a4226\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " Apr 23 17:56:21.620434 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620295 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-serving-cert\") pod \"29470926-5713-4d9b-8a39-cf795d0a4226\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " Apr 23 17:56:21.620434 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620319 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-trusted-ca-bundle\") pod \"29470926-5713-4d9b-8a39-cf795d0a4226\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " Apr 23 17:56:21.620542 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620476 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-oauth-config\") pod \"29470926-5713-4d9b-8a39-cf795d0a4226\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " Apr 23 17:56:21.620595 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620563 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-service-ca\") pod \"29470926-5713-4d9b-8a39-cf795d0a4226\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " Apr 23 17:56:21.620649 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620606 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-oauth-serving-cert\") pod \"29470926-5713-4d9b-8a39-cf795d0a4226\" (UID: \"29470926-5713-4d9b-8a39-cf795d0a4226\") " Apr 23 17:56:21.620728 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620703 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-console-config" (OuterVolumeSpecName: "console-config") pod "29470926-5713-4d9b-8a39-cf795d0a4226" (UID: "29470926-5713-4d9b-8a39-cf795d0a4226"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:21.620728 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620710 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "29470926-5713-4d9b-8a39-cf795d0a4226" (UID: "29470926-5713-4d9b-8a39-cf795d0a4226"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:21.620932 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620913 2574 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-console-config\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:21.621021 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.620937 2574 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-trusted-ca-bundle\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:21.621192 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.621072 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "29470926-5713-4d9b-8a39-cf795d0a4226" (UID: "29470926-5713-4d9b-8a39-cf795d0a4226"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:21.621330 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.621307 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-service-ca" (OuterVolumeSpecName: "service-ca") pod "29470926-5713-4d9b-8a39-cf795d0a4226" (UID: "29470926-5713-4d9b-8a39-cf795d0a4226"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:21.622556 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.622531 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29470926-5713-4d9b-8a39-cf795d0a4226-kube-api-access-zk7h5" (OuterVolumeSpecName: "kube-api-access-zk7h5") pod "29470926-5713-4d9b-8a39-cf795d0a4226" (UID: "29470926-5713-4d9b-8a39-cf795d0a4226"). InnerVolumeSpecName "kube-api-access-zk7h5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:21.622682 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.622655 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "29470926-5713-4d9b-8a39-cf795d0a4226" (UID: "29470926-5713-4d9b-8a39-cf795d0a4226"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:21.622738 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.622674 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "29470926-5713-4d9b-8a39-cf795d0a4226" (UID: "29470926-5713-4d9b-8a39-cf795d0a4226"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:21.722341 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.722311 2574 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-oauth-config\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:21.722341 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.722338 2574 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-service-ca\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:21.722341 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.722347 2574 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/29470926-5713-4d9b-8a39-cf795d0a4226-oauth-serving-cert\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:21.722534 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.722357 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zk7h5\" (UniqueName: \"kubernetes.io/projected/29470926-5713-4d9b-8a39-cf795d0a4226-kube-api-access-zk7h5\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:21.722534 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:21.722366 2574 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/29470926-5713-4d9b-8a39-cf795d0a4226-console-serving-cert\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:22.182057 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.182028 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59d4798cc5-q8q9r_29470926-5713-4d9b-8a39-cf795d0a4226/console/0.log" Apr 23 17:56:22.182224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.182070 2574 generic.go:358] "Generic (PLEG): container finished" podID="29470926-5713-4d9b-8a39-cf795d0a4226" containerID="c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0" exitCode=2 Apr 23 17:56:22.182224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.182100 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59d4798cc5-q8q9r" event={"ID":"29470926-5713-4d9b-8a39-cf795d0a4226","Type":"ContainerDied","Data":"c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0"} Apr 23 17:56:22.182224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.182133 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59d4798cc5-q8q9r" Apr 23 17:56:22.182224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.182139 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59d4798cc5-q8q9r" event={"ID":"29470926-5713-4d9b-8a39-cf795d0a4226","Type":"ContainerDied","Data":"a434f51ee5599248f3d19d927ec6d0d0bdca59a414118b280809735ca49e557a"} Apr 23 17:56:22.182224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.182156 2574 scope.go:117] "RemoveContainer" containerID="c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0" Apr 23 17:56:22.191350 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.191332 2574 scope.go:117] "RemoveContainer" containerID="c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0" Apr 23 17:56:22.191606 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:56:22.191588 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0\": container with ID starting with c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0 not found: ID does not exist" containerID="c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0" Apr 23 17:56:22.191687 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.191611 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0"} err="failed to get container status \"c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0\": rpc error: code = NotFound desc = could not find container \"c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0\": container with ID starting with c2d751dc51754db1c4667de661079cff70eec0120bb4cd04b0f58d918b77e5b0 not found: ID does not exist" Apr 23 17:56:22.214121 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.214101 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59d4798cc5-q8q9r"] Apr 23 17:56:22.220597 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.220576 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-59d4798cc5-q8q9r"] Apr 23 17:56:22.272645 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:22.272618 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29470926-5713-4d9b-8a39-cf795d0a4226" path="/var/lib/kubelet/pods/29470926-5713-4d9b-8a39-cf795d0a4226/volumes" Apr 23 17:56:25.157958 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.157893 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" podUID="42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" containerName="registry" containerID="cri-o://795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09" gracePeriod=30 Apr 23 17:56:25.423578 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.423558 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:56:25.553333 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553305 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-bound-sa-token\") pod \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " Apr 23 17:56:25.553467 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553337 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-certificates\") pod \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " Apr 23 17:56:25.553467 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553377 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-image-registry-private-configuration\") pod \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " Apr 23 17:56:25.553467 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553422 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-trusted-ca\") pod \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " Apr 23 17:56:25.553467 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553444 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-ca-trust-extracted\") pod \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " Apr 23 17:56:25.553467 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553460 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-installation-pull-secrets\") pod \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " Apr 23 17:56:25.553715 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553487 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4nnn\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-kube-api-access-n4nnn\") pod \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " Apr 23 17:56:25.553715 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553526 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-tls\") pod \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\" (UID: \"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9\") " Apr 23 17:56:25.553821 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.553802 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" (UID: "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:25.554069 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.554031 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" (UID: "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:25.555780 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.555741 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" (UID: "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:25.556142 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.556116 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" (UID: "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:25.556254 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.556178 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" (UID: "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:25.556507 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.556476 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-kube-api-access-n4nnn" (OuterVolumeSpecName: "kube-api-access-n4nnn") pod "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" (UID: "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9"). InnerVolumeSpecName "kube-api-access-n4nnn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:25.556577 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.556484 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" (UID: "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:25.562440 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.562413 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" (UID: "42ffbf1c-448d-41bd-8eae-566d6d4cb2d9"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:56:25.654811 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.654785 2574 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-trusted-ca\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:25.654811 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.654808 2574 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-ca-trust-extracted\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:25.654940 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.654818 2574 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-installation-pull-secrets\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:25.654940 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.654827 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4nnn\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-kube-api-access-n4nnn\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:25.654940 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.654836 2574 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-tls\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:25.654940 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.654866 2574 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-bound-sa-token\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:25.654940 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.654874 2574 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-registry-certificates\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:25.654940 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:25.654884 2574 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9-image-registry-private-configuration\") on node \"ip-10-0-135-87.ec2.internal\" DevicePath \"\"" Apr 23 17:56:26.198754 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.198716 2574 generic.go:358] "Generic (PLEG): container finished" podID="42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" containerID="795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09" exitCode=0 Apr 23 17:56:26.199224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.198781 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" Apr 23 17:56:26.199224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.198804 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" event={"ID":"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9","Type":"ContainerDied","Data":"795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09"} Apr 23 17:56:26.199224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.198870 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-55b97c5948-m4xr8" event={"ID":"42ffbf1c-448d-41bd-8eae-566d6d4cb2d9","Type":"ContainerDied","Data":"9bce06c82b0ea4acad6f13a69c1b687171e1d50df3f5ba5c574b4066d0727fcd"} Apr 23 17:56:26.199224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.198889 2574 scope.go:117] "RemoveContainer" containerID="795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09" Apr 23 17:56:26.200455 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.200430 2574 generic.go:358] "Generic (PLEG): container finished" podID="9601dc49-4014-4c79-9bb2-5871bb8d36a1" containerID="befc7f4764ac5818b2382b5bad205873b3a8a363dc5eb65d77879c0318b3d0b9" exitCode=0 Apr 23 17:56:26.200570 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.200500 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" event={"ID":"9601dc49-4014-4c79-9bb2-5871bb8d36a1","Type":"ContainerDied","Data":"befc7f4764ac5818b2382b5bad205873b3a8a363dc5eb65d77879c0318b3d0b9"} Apr 23 17:56:26.200794 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.200778 2574 scope.go:117] "RemoveContainer" containerID="befc7f4764ac5818b2382b5bad205873b3a8a363dc5eb65d77879c0318b3d0b9" Apr 23 17:56:26.208761 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.208745 2574 scope.go:117] "RemoveContainer" containerID="795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09" Apr 23 17:56:26.209095 ip-10-0-135-87 kubenswrapper[2574]: E0423 17:56:26.209073 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09\": container with ID starting with 795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09 not found: ID does not exist" containerID="795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09" Apr 23 17:56:26.209167 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.209103 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09"} err="failed to get container status \"795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09\": rpc error: code = NotFound desc = could not find container \"795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09\": container with ID starting with 795668277b3f5731bcffa1465f900c105a858c2a4c598dfb4a7699a8e9333a09 not found: ID does not exist" Apr 23 17:56:26.241526 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.241505 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-55b97c5948-m4xr8"] Apr 23 17:56:26.251960 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.251935 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-55b97c5948-m4xr8"] Apr 23 
17:56:26.274044 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:26.274018 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" path="/var/lib/kubelet/pods/42ffbf1c-448d-41bd-8eae-566d6d4cb2d9/volumes" Apr 23 17:56:27.206462 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:56:27.206423 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-567k5" event={"ID":"9601dc49-4014-4c79-9bb2-5871bb8d36a1","Type":"ContainerStarted","Data":"9d575bf27a3b6a0230906304532ac9a144cc59f5a208aac8a696b463b0f304b4"} Apr 23 17:57:10.156443 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:10.156413 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 17:57:10.157031 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:10.156472 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 17:57:10.164737 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:10.164607 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 17:57:10.164737 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:10.164643 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 17:57:10.167228 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:10.167211 2574 kubelet.go:1628] "Image garbage collection succeeded" Apr 23 17:57:19.128502 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128474 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-4gd2m"] Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128896 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b39a95d3-b859-4e2d-bbef-fca1ee288a74" containerName="registry" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128911 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39a95d3-b859-4e2d-bbef-fca1ee288a74" containerName="registry" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128927 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29470926-5713-4d9b-8a39-cf795d0a4226" containerName="console" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128933 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="29470926-5713-4d9b-8a39-cf795d0a4226" containerName="console" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128949 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" containerName="registry" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128955 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" containerName="registry" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128980 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="e684296b-68a2-4225-9296-807a9ed43d67" containerName="console" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.128988 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="e684296b-68a2-4225-9296-807a9ed43d67" containerName="console" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.129049 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="29470926-5713-4d9b-8a39-cf795d0a4226" containerName="console" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.129058 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="e684296b-68a2-4225-9296-807a9ed43d67" containerName="console" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.129067 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="42ffbf1c-448d-41bd-8eae-566d6d4cb2d9" containerName="registry" Apr 23 17:57:19.131067 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.129082 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="b39a95d3-b859-4e2d-bbef-fca1ee288a74" containerName="registry" Apr 23 17:57:19.132000 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.131982 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.134658 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.134637 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 23 17:57:19.177481 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.177456 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-4gd2m"] Apr 23 17:57:19.198285 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.198263 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/a94b32d2-886c-4388-bbeb-5579baee1db0-dbus\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.198396 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.198341 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a94b32d2-886c-4388-bbeb-5579baee1db0-kubelet-config\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.198396 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.198368 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a94b32d2-886c-4388-bbeb-5579baee1db0-original-pull-secret\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.299714 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.299678 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a94b32d2-886c-4388-bbeb-5579baee1db0-kubelet-config\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.299932 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.299720 2574 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a94b32d2-886c-4388-bbeb-5579baee1db0-original-pull-secret\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.299932 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.299833 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a94b32d2-886c-4388-bbeb-5579baee1db0-kubelet-config\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.299932 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.299873 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/a94b32d2-886c-4388-bbeb-5579baee1db0-dbus\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.300073 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.300000 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/a94b32d2-886c-4388-bbeb-5579baee1db0-dbus\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.302194 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.302176 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a94b32d2-886c-4388-bbeb-5579baee1db0-original-pull-secret\") pod \"global-pull-secret-syncer-4gd2m\" (UID: \"a94b32d2-886c-4388-bbeb-5579baee1db0\") " pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.442074 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.442040 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-4gd2m" Apr 23 17:57:19.568263 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.568239 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-4gd2m"] Apr 23 17:57:19.570488 ip-10-0-135-87 kubenswrapper[2574]: W0423 17:57:19.570458 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda94b32d2_886c_4388_bbeb_5579baee1db0.slice/crio-1252e810e1bc93b299a6fec99d9fb49f37a0e9b163c471c115b422215fedbb2a WatchSource:0}: Error finding container 1252e810e1bc93b299a6fec99d9fb49f37a0e9b163c471c115b422215fedbb2a: Status 404 returned error can't find the container with id 1252e810e1bc93b299a6fec99d9fb49f37a0e9b163c471c115b422215fedbb2a Apr 23 17:57:19.572099 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:19.572084 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 17:57:20.393224 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:20.393180 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-4gd2m" event={"ID":"a94b32d2-886c-4388-bbeb-5579baee1db0","Type":"ContainerStarted","Data":"1252e810e1bc93b299a6fec99d9fb49f37a0e9b163c471c115b422215fedbb2a"} Apr 23 17:57:23.406914 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:23.406818 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-4gd2m" event={"ID":"a94b32d2-886c-4388-bbeb-5579baee1db0","Type":"ContainerStarted","Data":"6b5e936724de78f4a4798692eaa396b13ab16622e2419de80f5df40d15f387ab"} Apr 23 17:57:23.426492 ip-10-0-135-87 kubenswrapper[2574]: I0423 17:57:23.426447 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-4gd2m" podStartSLOduration=1.066967845 podStartE2EDuration="4.426434957s" podCreationTimestamp="2026-04-23 17:57:19 +0000 UTC" firstStartedPulling="2026-04-23 17:57:19.572216154 +0000 UTC m=+309.936257073" lastFinishedPulling="2026-04-23 17:57:22.93168326 +0000 UTC m=+313.295724185" observedRunningTime="2026-04-23 17:57:23.425085499 +0000 UTC m=+313.789126442" watchObservedRunningTime="2026-04-23 17:57:23.426434957 +0000 UTC m=+313.790475910" Apr 23 18:02:10.189780 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:02:10.189742 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:02:10.190823 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:02:10.190802 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:02:10.196402 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:02:10.196383 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:02:10.197254 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:02:10.197236 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:07:10.216454 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:07:10.216422 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:07:10.218550 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:07:10.218528 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:07:10.222601 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:07:10.222580 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:07:10.224276 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:07:10.224258 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:12:10.241474 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:12:10.241440 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:12:10.248394 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:12:10.248371 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:12:10.250635 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:12:10.250614 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:12:10.254170 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:12:10.254152 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:17:10.271354 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:17:10.271322 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:17:10.278302 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:17:10.278277 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:17:10.278429 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:17:10.278329 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:17:10.283971 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:17:10.283945 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:22:10.301536 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:22:10.301497 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:22:10.305321 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:22:10.305301 
2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:22:10.307503 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:22:10.307484 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:22:10.311049 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:22:10.311030 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:27:10.337775 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:27:10.337735 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:27:10.343039 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:27:10.343015 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:27:10.344316 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:27:10.344298 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:27:10.348726 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:27:10.348709 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:32:10.366324 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:32:10.366294 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:32:10.372837 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:32:10.372814 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:32:10.373757 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:32:10.373737 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:32:10.379979 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:32:10.379962 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:37:10.400194 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:37:10.400111 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:37:10.406182 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:37:10.406162 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 
18:37:10.408373 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:37:10.408353 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:37:10.413887 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:37:10.413870 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:42:10.426523 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:42:10.426496 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:42:10.432142 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:42:10.432119 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:42:10.434602 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:42:10.434583 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:42:10.440144 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:42:10.440127 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:47:10.452175 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:47:10.452146 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:47:10.458343 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:47:10.458327 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:47:10.462050 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:47:10.462035 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:47:10.467440 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:47:10.467425 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:52:10.479415 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:52:10.479308 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log" Apr 23 18:52:10.486417 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:52:10.485195 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log" Apr 23 18:52:10.489714 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:52:10.489693 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log"
Apr 23 18:52:10.495545 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:52:10.495529 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log"
Apr 23 18:57:10.505008 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:57:10.504909 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log"
Apr 23 18:57:10.511023 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:57:10.511002 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log"
Apr 23 18:57:10.517225 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:57:10.517204 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log"
Apr 23 18:57:10.522738 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:57:10.522721 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-135-87.ec2.internal_b86d5a8aaa7fecdf67a597e125a8b168/kube-rbac-proxy-crio/4.log"
Apr 23 18:57:59.030996 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:57:59.030901 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-4gd2m_a94b32d2-886c-4388-bbeb-5579baee1db0/global-pull-secret-syncer/0.log"
Apr 23 18:57:59.262957 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:57:59.262932 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-v2msm_b4678728-6bf6-4a08-98fc-620935708987/konnectivity-agent/0.log"
Apr 23 18:57:59.393070 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:57:59.392978 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-135-87.ec2.internal_53b72ef69aad199cf5c99ac6ebdc0a72/haproxy/0.log"
Apr 23 18:58:02.685212 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.685170 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_96f385f0-80a9-4479-991e-2067a92047fd/alertmanager/0.log"
Apr 23 18:58:02.716765 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.716738 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_96f385f0-80a9-4479-991e-2067a92047fd/config-reloader/0.log"
Apr 23 18:58:02.745581 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.745561 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_96f385f0-80a9-4479-991e-2067a92047fd/kube-rbac-proxy-web/0.log"
Apr 23 18:58:02.776632 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.776608 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_96f385f0-80a9-4479-991e-2067a92047fd/kube-rbac-proxy/0.log"
Apr 23 18:58:02.805088 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.805068 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_96f385f0-80a9-4479-991e-2067a92047fd/kube-rbac-proxy-metric/0.log"
Apr 23 18:58:02.830487 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.830465 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_96f385f0-80a9-4479-991e-2067a92047fd/prom-label-proxy/0.log"
Apr 23 18:58:02.856171 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.856155 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_96f385f0-80a9-4479-991e-2067a92047fd/init-config-reloader/0.log"
Apr 23 18:58:02.915321 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.915302 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-75587bd455-8b4st_cabecf13-4b77-4125-bdb2-df08000b4d3d/cluster-monitoring-operator/0.log"
Apr 23 18:58:02.941249 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.941195 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-9wvv9_23abd42d-a8a4-44c2-9c7b-dd1ca477dc93/kube-state-metrics/0.log"
Apr 23 18:58:02.967870 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.967826 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-9wvv9_23abd42d-a8a4-44c2-9c7b-dd1ca477dc93/kube-rbac-proxy-main/0.log"
Apr 23 18:58:02.995234 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:02.995212 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-9wvv9_23abd42d-a8a4-44c2-9c7b-dd1ca477dc93/kube-rbac-proxy-self/0.log"
Apr 23 18:58:03.277716 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.277632 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-j7hjg_1a5bc9a8-8c44-4a50-91b5-1f0f006e4229/node-exporter/0.log"
Apr 23 18:58:03.301178 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.301155 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-j7hjg_1a5bc9a8-8c44-4a50-91b5-1f0f006e4229/kube-rbac-proxy/0.log"
Apr 23 18:58:03.326646 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.326619 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-j7hjg_1a5bc9a8-8c44-4a50-91b5-1f0f006e4229/init-textfile/0.log"
Apr 23 18:58:03.356381 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.356352 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-6q7jp_75a82e88-93ab-4540-b1fd-381e8e042f06/kube-rbac-proxy-main/0.log"
Apr 23 18:58:03.381577 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.381547 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-6q7jp_75a82e88-93ab-4540-b1fd-381e8e042f06/kube-rbac-proxy-self/0.log"
Apr 23 18:58:03.409213 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.409192 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-6q7jp_75a82e88-93ab-4540-b1fd-381e8e042f06/openshift-state-metrics/0.log"
Apr 23 18:58:03.665865 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.665766 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-k2mx9_d0291a82-c194-49a3-a786-a6fb55329b77/prometheus-operator/0.log"
Apr 23 18:58:03.690891 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.690866 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-k2mx9_d0291a82-c194-49a3-a786-a6fb55329b77/kube-rbac-proxy/0.log"
Apr 23 18:58:03.722393 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.722371 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-57cf98b594-rg2dm_b9811dcf-bfe0-485e-afae-82c020a66185/prometheus-operator-admission-webhook/0.log"
Apr 23 18:58:03.870020 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.870001 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6bff6c748f-b6mkb_662354fc-65c1-4dc1-a71f-b0640bab8b2f/thanos-query/0.log"
Apr 23 18:58:03.910544 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.910525 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6bff6c748f-b6mkb_662354fc-65c1-4dc1-a71f-b0640bab8b2f/kube-rbac-proxy-web/0.log"
Apr 23 18:58:03.944241 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.944218 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6bff6c748f-b6mkb_662354fc-65c1-4dc1-a71f-b0640bab8b2f/kube-rbac-proxy/0.log"
Apr 23 18:58:03.973487 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:03.973468 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6bff6c748f-b6mkb_662354fc-65c1-4dc1-a71f-b0640bab8b2f/prom-label-proxy/0.log"
Apr 23 18:58:04.006031 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:04.006008 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6bff6c748f-b6mkb_662354fc-65c1-4dc1-a71f-b0640bab8b2f/kube-rbac-proxy-rules/0.log"
Apr 23 18:58:04.039560 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:04.039539 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6bff6c748f-b6mkb_662354fc-65c1-4dc1-a71f-b0640bab8b2f/kube-rbac-proxy-metrics/0.log"
Apr 23 18:58:05.170369 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:05.170336 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-cb95c66f6-727jd_e4f9f970-44a9-4e79-ac39-0cfc094cc4ca/networking-console-plugin/0.log"
Apr 23 18:58:05.631178 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:05.631144 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/1.log"
Apr 23 18:58:05.639267 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:05.639240 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-phhz6_5a622cde-4463-4b2b-a60a-0724fdeeb5e3/console-operator/2.log"
Apr 23 18:58:06.116824 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.116791 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-6bcc868b7-84pz9_03e6e9ae-fd11-43a3-8abe-baa38a028607/download-server/0.log"
Apr 23 18:58:06.129376 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.129353 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"]
Apr 23 18:58:06.133164 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.133146 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.135662 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.135641 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-pzxs4\"/\"openshift-service-ca.crt\""
Apr 23 18:58:06.136898 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.136878 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-pzxs4\"/\"default-dockercfg-92lwm\""
Apr 23 18:58:06.136898 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.136888 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-pzxs4\"/\"kube-root-ca.crt\""
Apr 23 18:58:06.142285 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.142260 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"]
Apr 23 18:58:06.197026 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.197003 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwjfr\" (UniqueName: \"kubernetes.io/projected/543b9754-e0c1-46c0-b20b-74b9895f4ddc-kube-api-access-wwjfr\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.197359 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.197042 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-sys\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.197359 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.197060 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-proc\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.197359 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.197174 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-lib-modules\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.197359 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.197220 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-podres\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298460 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298435 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wwjfr\" (UniqueName: \"kubernetes.io/projected/543b9754-e0c1-46c0-b20b-74b9895f4ddc-kube-api-access-wwjfr\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298571 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298468 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-sys\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298571 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298484 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-proc\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298571 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298544 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-proc\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298571 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298565 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-sys\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298784 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298607 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-lib-modules\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298784 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298641 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-podres\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298784 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298758 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-podres\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.298784 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.298767 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543b9754-e0c1-46c0-b20b-74b9895f4ddc-lib-modules\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.309157 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.309133 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwjfr\" (UniqueName: \"kubernetes.io/projected/543b9754-e0c1-46c0-b20b-74b9895f4ddc-kube-api-access-wwjfr\") pod \"perf-node-gather-daemonset-tn5k7\" (UID: \"543b9754-e0c1-46c0-b20b-74b9895f4ddc\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.444315 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.444294 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:06.570268 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.570225 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"]
Apr 23 18:58:06.573697 ip-10-0-135-87 kubenswrapper[2574]: W0423 18:58:06.573668 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod543b9754_e0c1_46c0_b20b_74b9895f4ddc.slice/crio-30afa9e9382fcf4c8224bb9acf1a7d4ca0504aed3dfe5e97875a465c8bf04c96 WatchSource:0}: Error finding container 30afa9e9382fcf4c8224bb9acf1a7d4ca0504aed3dfe5e97875a465c8bf04c96: Status 404 returned error can't find the container with id 30afa9e9382fcf4c8224bb9acf1a7d4ca0504aed3dfe5e97875a465c8bf04c96
Apr 23 18:58:06.575335 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.575319 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:58:06.598719 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:06.598698 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7c6cbb6c87-24f9z_926cf4a9-abea-43b7-baa6-dc9cd9430a00/volume-data-source-validator/0.log"
Apr 23 18:58:07.089786 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:07.089752 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7" event={"ID":"543b9754-e0c1-46c0-b20b-74b9895f4ddc","Type":"ContainerStarted","Data":"44057fd57567b699c37c8055239372a7d90ba2210ae1bfe6a30c3f60ed12f734"}
Apr 23 18:58:07.089786 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:07.089788 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7" event={"ID":"543b9754-e0c1-46c0-b20b-74b9895f4ddc","Type":"ContainerStarted","Data":"30afa9e9382fcf4c8224bb9acf1a7d4ca0504aed3dfe5e97875a465c8bf04c96"}
Apr 23 18:58:07.090118 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:07.089860 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:07.113104 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:07.113064 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7" podStartSLOduration=1.113053359 podStartE2EDuration="1.113053359s" podCreationTimestamp="2026-04-23 18:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:58:07.112347702 +0000 UTC m=+3957.476388648" watchObservedRunningTime="2026-04-23 18:58:07.113053359 +0000 UTC m=+3957.477094337"
Apr 23 18:58:07.381625 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:07.381554 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-hqlvp_570f4ccf-8f66-420f-9543-207c02da2783/dns/0.log"
Apr 23 18:58:07.407740 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:07.407709 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-hqlvp_570f4ccf-8f66-420f-9543-207c02da2783/kube-rbac-proxy/0.log"
Apr 23 18:58:07.606965 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:07.606938 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z72z9_4f625df8-2016-4ff3-8cc7-d03314b05183/dns-node-resolver/0.log"
Apr 23 18:58:08.113385 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:08.113360 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-4767s_58265a7e-9515-43ed-8838-b59c7bc68f1a/node-ca/0.log"
Apr 23 18:58:08.963583 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:08.963544 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-85cf97bcfb-crk2g_ef6bbc19-ba30-4d63-ad0f-d37109da20b7/router/0.log"
Apr 23 18:58:09.355593 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:09.355523 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-6txtb_a455b3cc-b20e-46c2-9f70-3c5be09cad64/serve-healthcheck-canary/0.log"
Apr 23 18:58:09.833063 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:09.833025 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-585dfdc468-7h7vz_d01e1208-1867-464a-822f-89683cda0372/insights-operator/0.log"
Apr 23 18:58:09.834353 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:09.834318 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-585dfdc468-7h7vz_d01e1208-1867-464a-822f-89683cda0372/insights-operator/1.log"
Apr 23 18:58:09.861728 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:09.861706 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mvdnw_fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1/kube-rbac-proxy/0.log"
Apr 23 18:58:09.891592 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:09.891566 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mvdnw_fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1/exporter/0.log"
Apr 23 18:58:09.922501 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:09.922481 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mvdnw_fb6a5ffd-e8aa-4ab3-a7ab-8658bebc06b1/extractor/0.log"
Apr 23 18:58:13.104668 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:13.104639 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-tn5k7"
Apr 23 18:58:17.646371 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:17.646275 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-jggdb_c068db57-9b93-4515-9608-59a3ccaa6d07/migrator/0.log"
Apr 23 18:58:17.673733 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:17.673700 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-jggdb_c068db57-9b93-4515-9608-59a3ccaa6d07/graceful-termination/0.log"
Apr 23 18:58:18.039448 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:18.039414 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-567k5_9601dc49-4014-4c79-9bb2-5871bb8d36a1/kube-storage-version-migrator-operator/1.log"
Apr 23 18:58:18.041403 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:18.041357 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-567k5_9601dc49-4014-4c79-9bb2-5871bb8d36a1/kube-storage-version-migrator-operator/0.log"
Apr 23 18:58:19.450770 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.450741 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rn6ls_bbe2b171-bf55-475a-a044-e38bab188f11/kube-multus-additional-cni-plugins/0.log"
Apr 23 18:58:19.474647 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.474623 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rn6ls_bbe2b171-bf55-475a-a044-e38bab188f11/egress-router-binary-copy/0.log"
Apr 23 18:58:19.500135 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.500113 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rn6ls_bbe2b171-bf55-475a-a044-e38bab188f11/cni-plugins/0.log"
Apr 23 18:58:19.524114 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.524090 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rn6ls_bbe2b171-bf55-475a-a044-e38bab188f11/bond-cni-plugin/0.log"
Apr 23 18:58:19.550025 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.550004 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rn6ls_bbe2b171-bf55-475a-a044-e38bab188f11/routeoverride-cni/0.log"
Apr 23 18:58:19.576410 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.576394 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rn6ls_bbe2b171-bf55-475a-a044-e38bab188f11/whereabouts-cni-bincopy/0.log"
Apr 23 18:58:19.601788 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.601770 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rn6ls_bbe2b171-bf55-475a-a044-e38bab188f11/whereabouts-cni/0.log"
Apr 23 18:58:19.638358 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.638328 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hhc5p_339ba7f9-7ad9-40ca-b311-6f109fbcfc6a/kube-multus/0.log"
Apr 23 18:58:19.804280 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.804211 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-v8bcb_43c90ba9-23a0-4be9-a89b-8ff980f1bb05/network-metrics-daemon/0.log"
Apr 23 18:58:19.824577 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:19.824553 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-v8bcb_43c90ba9-23a0-4be9-a89b-8ff980f1bb05/kube-rbac-proxy/0.log"
Apr 23 18:58:21.034322 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:21.034283 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhdnl_934aa068-0f79-4196-9fc1-e81a90b22334/ovn-controller/0.log"
Apr 23 18:58:21.091306 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:21.091267 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhdnl_934aa068-0f79-4196-9fc1-e81a90b22334/ovn-acl-logging/0.log"
Apr 23 18:58:21.111701 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:21.111679 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhdnl_934aa068-0f79-4196-9fc1-e81a90b22334/kube-rbac-proxy-node/0.log"
Apr 23 18:58:21.135257 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:21.135212 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhdnl_934aa068-0f79-4196-9fc1-e81a90b22334/kube-rbac-proxy-ovn-metrics/0.log"
Apr 23 18:58:21.156727 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:21.156706 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhdnl_934aa068-0f79-4196-9fc1-e81a90b22334/northd/0.log"
Apr 23 18:58:21.179156 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:21.179133 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhdnl_934aa068-0f79-4196-9fc1-e81a90b22334/nbdb/0.log"
Apr 23 18:58:21.201572 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:21.201555 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhdnl_934aa068-0f79-4196-9fc1-e81a90b22334/sbdb/0.log"
Apr 23 18:58:21.383581 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:21.383516 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhdnl_934aa068-0f79-4196-9fc1-e81a90b22334/ovnkube-controller/0.log"
Apr 23 18:58:22.890095 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:22.890055 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-8894fc9bd-pvs64_fff84aa0-f5b3-4d5a-add6-04dc79b3bf54/check-endpoints/0.log"
Apr 23 18:58:22.970079 ip-10-0-135-87 kubenswrapper[2574]: I0423 18:58:22.970053 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-vfxjl_194b68f6-135d-472e-a449-ddda482b9755/network-check-target-container/0.log"