Apr 17 14:04:42.801063 ip-10-0-140-104 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 17 14:04:42.801074 ip-10-0-140-104 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 17 14:04:42.801081 ip-10-0-140-104 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 17 14:04:42.801356 ip-10-0-140-104 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 17 14:04:52.838383 ip-10-0-140-104 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 17 14:04:52.838402 ip-10-0-140-104 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot c5840c237e524fcfbe057ff58edbdc26 --
Apr 17 14:07:30.543193 ip-10-0-140-104 systemd[1]: Starting Kubernetes Kubelet...
Apr 17 14:07:30.977792 ip-10-0-140-104 kubenswrapper[2568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 14:07:30.977792 ip-10-0-140-104 kubenswrapper[2568]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 17 14:07:30.977792 ip-10-0-140-104 kubenswrapper[2568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 14:07:30.977792 ip-10-0-140-104 kubenswrapper[2568]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 14:07:30.977792 ip-10-0-140-104 kubenswrapper[2568]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
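The deprecation warnings above all point at the kubelet config file named by --config (shown later in this log as /etc/kubernetes/kubelet.conf). As a minimal sketch only, assuming the upstream KubeletConfiguration field names and reusing values visible elsewhere in this log, not the contents of this node's actual file, the flagged flags would map into that file roughly as:

    # Hypothetical KubeletConfiguration sketch; values mirror the flags
    # dumped later in this log, not a real /etc/kubernetes/kubelet.conf.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: /var/run/crio/crio.sock            # replaces --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec # replaces --volume-plugin-dir
    systemReserved:                                              # replaces --system-reserved
      cpu: 500m
      ephemeral-storage: 1Gi
      memory: 1Gi
    evictionHard:                  # --minimum-container-ttl-duration has no direct
      memory.available: 100Mi      # config equivalent; the warning suggests eviction
                                   # thresholds instead (this value is illustrative)

--pod-infra-container-image is a different case: per the warning it is simply being removed in 1.35, with the image garbage collector getting the sandbox image from CRI instead.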
Apr 17 14:07:30.978573 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.978487 2568 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 14:07:30.982446 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982416 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 14:07:30.982446 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982443 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 14:07:30.982446 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982449 2568 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 14:07:30.982446 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982455 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982459 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982464 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982469 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982479 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982485 2568 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982489 2568 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982493 2568 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982497 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982501 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982506 2568 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982510 2568 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982515 2568 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982521 2568 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982526 2568 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982530 2568 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982534 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982543 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982547 2568 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 14:07:30.982670 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982553 2568 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982558 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982562 2568 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982566 2568 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982570 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982575 2568 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982579 2568 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982583 2568 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982588 2568 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982592 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982601 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982606 2568 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982610 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982615 2568 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982619 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982624 2568 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982628 2568 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982632 2568 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982637 2568 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982642 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982647 2568 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 14:07:30.983315 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982652 2568 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982658 2568 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982661 2568 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982663 2568 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982666 2568 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982670 2568 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982673 2568 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982676 2568 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982679 2568 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982683 2568 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982688 2568 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982693 2568 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982697 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982702 2568 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982709 2568 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982712 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982716 2568 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982719 2568 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982722 2568 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 14:07:30.983824 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982725 2568 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982729 2568 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982732 2568 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982735 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982739 2568 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982742 2568 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982747 2568 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982753 2568 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982756 2568 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982760 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982762 2568 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982765 2568 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982769 2568 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982772 2568 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982792 2568 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982835 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982842 2568 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982848 2568 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982868 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982873 2568 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 14:07:30.984296 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982878 2568 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982882 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982887 2568 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.982891 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983618 2568 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983630 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983634 2568 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983636 2568 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983639 2568 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983643 2568 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983646 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983649 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983651 2568 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983654 2568 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983656 2568 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983659 2568 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983662 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983664 2568 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983667 2568 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983669 2568 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 14:07:30.984781 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983673 2568 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983680 2568 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983683 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983685 2568 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983688 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983690 2568 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983693 2568 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983696 2568 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983699 2568 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983702 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983704 2568 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983707 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983709 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983712 2568 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983715 2568 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983717 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983722 2568 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983726 2568 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983729 2568 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 14:07:30.985285 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983732 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983735 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983737 2568 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983740 2568 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983743 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983745 2568 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983748 2568 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983751 2568 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983753 2568 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983756 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983758 2568 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983761 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983763 2568 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983766 2568 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983769 2568 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983771 2568 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983774 2568 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983776 2568 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983779 2568 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983781 2568 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 14:07:30.985792 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983784 2568 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983787 2568 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983790 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983792 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983795 2568 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983798 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983800 2568 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983803 2568 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983805 2568 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983808 2568 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983810 2568 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983813 2568 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983815 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983818 2568 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983820 2568 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983823 2568 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983826 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983830 2568 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983834 2568 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983837 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 14:07:30.986300 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983840 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983843 2568 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983845 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983848 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983851 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983868 2568 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983873 2568 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983878 2568 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983883 2568 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983886 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.983888 2568 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984668 2568 flags.go:64] FLAG: --address="0.0.0.0"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984677 2568 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984685 2568 flags.go:64] FLAG: --anonymous-auth="true"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984690 2568 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984695 2568 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984706 2568 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984712 2568 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984717 2568 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984720 2568 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984723 2568 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 17 14:07:30.986785 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984727 2568 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984730 2568 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984733 2568 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984736 2568 flags.go:64] FLAG: --cgroup-root=""
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984739 2568 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984742 2568 flags.go:64] FLAG: --client-ca-file=""
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984745 2568 flags.go:64] FLAG: --cloud-config=""
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984748 2568 flags.go:64] FLAG: --cloud-provider="external"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984751 2568 flags.go:64] FLAG: --cluster-dns="[]"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984755 2568 flags.go:64] FLAG: --cluster-domain=""
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984758 2568 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984761 2568 flags.go:64] FLAG: --config-dir=""
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984764 2568 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984768 2568 flags.go:64] FLAG: --container-log-max-files="5"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984772 2568 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984775 2568 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984778 2568 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984781 2568 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984784 2568 flags.go:64] FLAG: --contention-profiling="false"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984788 2568 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984791 2568 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984794 2568 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984797 2568 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984802 2568 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984806 2568 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 17 14:07:30.987296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984809 2568 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984813 2568 flags.go:64] FLAG: --enable-load-reader="false"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984816 2568 flags.go:64] FLAG: --enable-server="true"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984820 2568 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984826 2568 flags.go:64] FLAG: --event-burst="100"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984829 2568 flags.go:64] FLAG: --event-qps="50"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984832 2568 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984835 2568 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984838 2568 flags.go:64] FLAG: --eviction-hard=""
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984850 2568 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984853 2568 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984868 2568 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984872 2568 flags.go:64] FLAG: --eviction-soft=""
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984875 2568 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984878 2568 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984882 2568 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984885 2568 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984888 2568 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984891 2568 flags.go:64] FLAG: --fail-swap-on="true"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984894 2568 flags.go:64] FLAG: --feature-gates=""
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984899 2568 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984902 2568 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984905 2568 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984908 2568 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984912 2568 flags.go:64] FLAG: --healthz-port="10248"
Apr 17 14:07:30.987902 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984915 2568 flags.go:64] FLAG: --help="false"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984918 2568 flags.go:64] FLAG: --hostname-override="ip-10-0-140-104.ec2.internal"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984922 2568 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984925 2568 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984928 2568 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984932 2568 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984936 2568 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984939 2568 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984942 2568 flags.go:64] FLAG: --image-service-endpoint=""
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984945 2568 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984949 2568 flags.go:64] FLAG: --kube-api-burst="100"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984952 2568 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984955 2568 flags.go:64] FLAG: --kube-api-qps="50"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984958 2568 flags.go:64] FLAG: --kube-reserved=""
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984961 2568 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984964 2568 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984967 2568 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984970 2568 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984973 2568 flags.go:64] FLAG: --lock-file=""
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984976 2568 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984979 2568 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984982 2568 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984987 2568 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 17 14:07:30.988504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984990 2568 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984993 2568 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984996 2568 flags.go:64] FLAG: --logging-format="text"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.984999 2568 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985003 2568 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985006 2568 flags.go:64] FLAG: --manifest-url=""
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985009 2568 flags.go:64] FLAG: --manifest-url-header=""
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985013 2568 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985017 2568 flags.go:64] FLAG: --max-open-files="1000000"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985021 2568 flags.go:64] FLAG: --max-pods="110"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985024 2568 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985027 2568 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985030 2568 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985033 2568 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985036 2568 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985039 2568 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985042 2568 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985050 2568 flags.go:64] FLAG: --node-status-max-images="50"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985053 2568 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985056 2568 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985060 2568 flags.go:64] FLAG: --pod-cidr=""
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985063 2568 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985068 2568 flags.go:64] FLAG: --pod-manifest-path=""
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985071 2568 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 17 14:07:30.989081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985074 2568 flags.go:64] FLAG: --pods-per-core="0"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985077 2568 flags.go:64] FLAG: --port="10250"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985081 2568 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985084 2568 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0ec5addb8cf0bce1f"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985088 2568 flags.go:64] FLAG: --qos-reserved=""
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985091 2568 flags.go:64] FLAG: --read-only-port="10255"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985094 2568 flags.go:64] FLAG: --register-node="true"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985097 2568 flags.go:64] FLAG: --register-schedulable="true"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985100 2568 flags.go:64] FLAG: --register-with-taints=""
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985104 2568 flags.go:64] FLAG: --registry-burst="10"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985107 2568 flags.go:64] FLAG: --registry-qps="5"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985109 2568 flags.go:64] FLAG: --reserved-cpus=""
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985113 2568 flags.go:64] FLAG: --reserved-memory=""
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985117 2568 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985120 2568 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985123 2568 flags.go:64] FLAG: --rotate-certificates="false"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985126 2568 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985129 2568 flags.go:64] FLAG: --runonce="false"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985132 2568 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985135 2568 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985139 2568 flags.go:64] FLAG: --seccomp-default="false"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985141 2568 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985144 2568 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985148 2568 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985151 2568 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985154 2568 flags.go:64] FLAG: --storage-driver-password="root"
Apr 17 14:07:30.989657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985157 2568 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985160 2568 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985163 2568 flags.go:64] FLAG: --storage-driver-user="root"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985166 2568 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985169 2568 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985172 2568 flags.go:64] FLAG: --system-cgroups=""
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985175 2568 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985181 2568 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985183 2568 flags.go:64] FLAG: --tls-cert-file=""
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985187 2568 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985190 2568 flags.go:64] FLAG: --tls-min-version=""
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985193 2568 flags.go:64] FLAG: --tls-private-key-file=""
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985196 2568 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985199 2568 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985202 2568 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985205 2568 flags.go:64] FLAG: --v="2"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985210 2568 flags.go:64] FLAG: --version="false"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985216 2568 flags.go:64] FLAG: --vmodule=""
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985221 2568 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.985225 2568 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985326 2568 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985330 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985334 2568 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985337 2568 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 14:07:30.990302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985340 2568 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985343 2568 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985346 2568 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985349 2568 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985352 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985355 2568 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985358 2568 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985360 2568 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985363 2568 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985366 2568 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985368 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985371 2568 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985374 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985377 2568 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985379 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985382 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985385 2568 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985387 2568 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985389 2568 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 14:07:30.990928 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985392 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985395 2568 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985398 2568 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985401 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985403 2568 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985406 2568 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985409 2568 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985412 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985415 2568 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985417 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985420 2568 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985423 2568 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985425 2568 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985428 2568 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985430 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985433 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985435 2568 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985438 2568 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985440 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985443 2568 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 14:07:30.991411 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985445 2568 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985448 2568 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985450 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985453 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985455 2568 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985458 2568 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985461 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985463 2568 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985466 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985468 2568 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985472 2568 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985475 2568 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985478 2568 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985481 2568 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985485 2568 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985489 2568 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985492 2568 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985494 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985497 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 14:07:30.991920 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985500 2568 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985503 2568 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985505 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985508 2568 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985511 2568 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985513 2568 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985515 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985518 2568 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985520 2568 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985523 2568 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985525 2568 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985528 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985530 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985533 2568 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 14:07:30.992379 ip-10-0-140-104 
kubenswrapper[2568]: W0417 14:07:30.985535 2568 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985538 2568 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985540 2568 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985543 2568 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985545 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985548 2568 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 14:07:30.992379 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985551 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 14:07:30.992960 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985554 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 14:07:30.992960 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985557 2568 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 14:07:30.992960 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.985559 2568 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 14:07:30.992960 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.986283 2568 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 17 14:07:30.993214 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.993191 2568 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 17 14:07:30.993244 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.993215 2568 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 14:07:30.993270 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993265 2568 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 14:07:30.993270 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993270 2568 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993274 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993278 2568 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993281 2568 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993284 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993287 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 14:07:30.993320 
ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993289 2568 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993292 2568 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993295 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993298 2568 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993301 2568 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993303 2568 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993306 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993309 2568 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993312 2568 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993314 2568 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993317 2568 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993320 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993323 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993326 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 14:07:30.993320 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993329 2568 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993332 2568 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993335 2568 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993338 2568 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993340 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993343 2568 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993345 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993348 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993350 2568 
feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993353 2568 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993356 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993358 2568 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993361 2568 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993363 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993366 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993369 2568 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993371 2568 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993375 2568 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993379 2568 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 14:07:30.993817 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993382 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993385 2568 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993388 2568 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993391 2568 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993393 2568 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993396 2568 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993398 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993401 2568 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993403 2568 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993406 2568 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993408 2568 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993411 2568 feature_gate.go:328] unrecognized feature 
gate: GCPCustomAPIEndpoints Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993413 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993416 2568 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993418 2568 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993422 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993425 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993428 2568 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993430 2568 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993433 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 14:07:30.994302 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993436 2568 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993438 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993442 2568 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993446 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993453 2568 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993456 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993459 2568 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993462 2568 feature_gate.go:328] unrecognized feature gate: Example Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993464 2568 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993467 2568 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993470 2568 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993472 2568 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993475 2568 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993477 2568 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993480 2568 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993482 2568 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993485 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993488 2568 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993490 2568 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 14:07:30.994813 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993493 2568 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993495 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993498 2568 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993500 2568 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993503 2568 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993506 2568 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 14:07:30.995365 ip-10-0-140-104 
kubenswrapper[2568]: W0417 14:07:30.993508 2568 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.993513 2568 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993611 2568 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993616 2568 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993619 2568 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993622 2568 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993625 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993627 2568 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993630 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 14:07:30.995365 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993633 2568 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993636 2568 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993639 2568 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993642 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993644 2568 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993647 2568 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993649 2568 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993652 2568 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993654 2568 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993657 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 
14:07:30.993660 2568 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993663 2568 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993665 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993668 2568 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993670 2568 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993673 2568 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993675 2568 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993678 2568 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993681 2568 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993683 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 14:07:30.995745 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993686 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993688 2568 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993691 2568 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993693 2568 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993696 2568 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993699 2568 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993702 2568 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993704 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993707 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993709 2568 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993712 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993715 2568 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993717 2568 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 
14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993720 2568 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993722 2568 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993725 2568 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993728 2568 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993731 2568 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993734 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993736 2568 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 14:07:30.996262 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993738 2568 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993741 2568 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993743 2568 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993746 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993749 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993751 2568 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993753 2568 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993756 2568 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993758 2568 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993761 2568 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993765 2568 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993768 2568 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993771 2568 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993774 2568 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993777 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993780 2568 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993783 2568 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993786 2568 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993788 2568 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 14:07:30.996758 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993792 2568 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993795 2568 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993798 2568 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993800 2568 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993803 2568 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993805 2568 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993808 2568 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993810 2568 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993814 2568 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993818 2568 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993822 2568 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993825 2568 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993828 2568 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993831 2568 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993833 2568 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993836 2568 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993839 2568 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993841 2568 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993844 2568 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 14:07:30.997240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:30.993847 2568 feature_gate.go:328] unrecognized feature gate: Example Apr 17 14:07:30.997704 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.993851 2568 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 17 14:07:30.997704 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.994699 2568 server.go:962] "Client rotation is on, will bootstrap in background" Apr 17 14:07:30.997704 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.996971 2568 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 17 14:07:30.997831 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.997818 2568 server.go:1019] "Starting client certificate rotation" Apr 17 14:07:30.997956 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.997920 2568 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 17 14:07:30.997956 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:30.997953 2568 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 17 14:07:31.022353 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.022329 2568 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 17 14:07:31.024924 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.024899 2568 dynamic_cafile_content.go:161] "Starting controller" 
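The unrecognized-gate dump repeats above because the wrapper re-parses the same combined gate list (most of these names are OpenShift cluster-level gates that the kubelet's own registry does not know), and the effective "feature gates: {map[...]}" line printed after each dump is identical every time. A minimal sketch to confirm the dumps carry no unique gates, reading a journal capture on stdin (script name illustrative):

```python
#!/usr/bin/env python3
"""Dedupe the kubelet's 'unrecognized feature gate' warnings from a journal dump."""
import re
import sys
from collections import Counter

# Matches the klog warning emitted by feature_gate.go:328.
PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

counts = Counter()
for line in sys.stdin:
    for gate in PATTERN.findall(line):
        counts[gate] += 1

# Each gate should show the same count (one warning per parse); an outlier
# would mean one of the dumps differs from the others.
for gate, n in sorted(counts.items()):
    print(f"{n}\t{gate}")
```

Typical use: `journalctl -u kubelet | python3 gate_warnings.py`.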
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 17 14:07:31.036418 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.036392 2568 log.go:25] "Validated CRI v1 runtime API" Apr 17 14:07:31.042428 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.042404 2568 log.go:25] "Validated CRI v1 image API" Apr 17 14:07:31.046825 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.046807 2568 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 14:07:31.051012 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.050989 2568 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 17 14:07:31.053923 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.053902 2568 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/nvme0n1p2 e966668c-66bb-4cb9-9d47-adc0047130b0:/dev/nvme0n1p3 eb043192-56c7-4325-b423-071e3982aa99:/dev/nvme0n1p4] Apr 17 14:07:31.054000 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.053922 2568 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 17 14:07:31.059531 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.059422 2568 manager.go:217] Machine: {Timestamp:2026-04-17 14:07:31.057616015 +0000 UTC m=+0.395407128 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3161091 MemoryCapacity:33164492800 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec204454eac90119bd8360abc4144ff2 SystemUUID:ec204454-eac9-0119-bd83-60abc4144ff2 BootID:c5840c23-7e52-4fcf-be05-7ff58edbdc26 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:85:cf:c1:48:b7 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:85:cf:c1:48:b7 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:86:81:1d:66:b8:00 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164492800 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} 
{Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 17 14:07:31.059531 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.059526 2568 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 17 14:07:31.059634 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.059609 2568 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 17 14:07:31.059966 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.059945 2568 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 14:07:31.060133 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.059968 2568 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-140-104.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 14:07:31.060180 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.060143 2568 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 14:07:31.060180 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.060152 2568 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 14:07:31.060180 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.060165 2568 manager.go:141] "Creating Device Plugin manager" 
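The nodeConfig entry above pins down the node's resource bookkeeping: KubeReserved is null, SystemReserved is 500m CPU / 1Gi memory / 1Gi ephemeral-storage, and the hard eviction threshold for memory.available is 100Mi. Combined with MemoryCapacity from the Machine: line, the standard node-allocatable formula (allocatable = capacity - kube-reserved - system-reserved - hard eviction threshold) gives what the scheduler will see; a back-of-the-envelope check in Python:

```python
# Node-allocatable arithmetic from the values logged above.
# allocatable = capacity - kube_reserved - system_reserved - hard_eviction
GIB = 1024 ** 3
MIB = 1024 ** 2

mem_capacity = 33_164_492_800      # MemoryCapacity from the Machine: line
kube_reserved = 0                  # "KubeReserved":null in nodeConfig
system_reserved = 1 * GIB          # "SystemReserved":{"memory":"1Gi"}
hard_eviction = 100 * MIB          # memory.available threshold "100Mi"

mem_allocatable = mem_capacity - kube_reserved - system_reserved - hard_eviction
print(f"allocatable memory = {mem_allocatable} bytes ({mem_allocatable / GIB:.2f} GiB)")
# -> about 29.79 GiB out of the ~30.89 GiB capacity

cpu_capacity_m = 8 * 1000          # NumCores:8 from the Machine: line
cpu_allocatable_m = cpu_capacity_m - 500   # "SystemReserved":{"cpu":"500m"}
print(f"allocatable cpu = {cpu_allocatable_m}m")   # -> 7500m
```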
path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 17 14:07:31.061094 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.061083 2568 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 17 14:07:31.062798 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.062788 2568 state_mem.go:36] "Initialized new in-memory state store" Apr 17 14:07:31.062923 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.062914 2568 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 17 14:07:31.065058 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.065049 2568 kubelet.go:491] "Attempting to sync node with API server" Apr 17 14:07:31.065091 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.065062 2568 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 14:07:31.065091 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.065075 2568 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 17 14:07:31.065091 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.065084 2568 kubelet.go:397] "Adding apiserver pod source" Apr 17 14:07:31.065194 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.065100 2568 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 14:07:31.066144 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.066128 2568 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 17 14:07:31.066144 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.066147 2568 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 17 14:07:31.068171 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.068153 2568 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-wpbnd" Apr 17 14:07:31.068977 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.068960 2568 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 17 14:07:31.070719 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.070706 2568 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 14:07:31.072592 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072574 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 17 14:07:31.072653 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072597 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 17 14:07:31.072653 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072607 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 17 14:07:31.072653 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072616 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 17 14:07:31.072653 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072624 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 17 14:07:31.072653 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072634 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 17 14:07:31.072802 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072656 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 17 14:07:31.072802 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:07:31.072668 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 17 14:07:31.072802 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072680 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 17 14:07:31.072802 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072690 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 17 14:07:31.072802 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072720 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 17 14:07:31.072802 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.072733 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 17 14:07:31.073488 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.073478 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 17 14:07:31.073519 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.073490 2568 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 17 14:07:31.077074 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.077053 2568 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-wpbnd" Apr 17 14:07:31.077175 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.077162 2568 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 14:07:31.077239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.077196 2568 server.go:1295] "Started kubelet" Apr 17 14:07:31.077388 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.077332 2568 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 14:07:31.077614 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.077574 2568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 14:07:31.077691 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.077634 2568 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 17 14:07:31.078242 ip-10-0-140-104 systemd[1]: Started Kubernetes Kubelet. 
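At this point the kubelet reports "Started kubelet" and listens on 0.0.0.0:10250, the authenticated HTTPS API. For a quick local liveness check one would normally hit the separate healthz endpoint instead, which defaults to 127.0.0.1:10248; that port is an assumption here, since this log does not show a --healthz-port setting:

```python
# Minimal local liveness check, assuming the kubelet's default healthz
# endpoint (127.0.0.1:10248); the authenticated HTTPS API is on 10250.
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:10248/healthz", timeout=5) as resp:
    print(resp.status, resp.read().decode())  # expect: 200 ok
```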
Apr 17 14:07:31.078977 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.078950 2568 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-140-104.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 17 14:07:31.079053 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.078975 2568 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-140-104.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 14:07:31.079053 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.079029 2568 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 14:07:31.080110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.080092 2568 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 14:07:31.081694 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.081677 2568 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 14:07:31.085086 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.085060 2568 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 17 14:07:31.088357 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.088339 2568 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 17 14:07:31.088929 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.088907 2568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 14:07:31.089749 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.089685 2568 factory.go:55] Registering systemd factory
Apr 17 14:07:31.089749 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.089740 2568 factory.go:223] Registration of the systemd container factory successfully
Apr 17 14:07:31.090068 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.089830 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found"
Apr 17 14:07:31.090068 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.089922 2568 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 17 14:07:31.090068 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.089969 2568 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 17 14:07:31.090068 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090008 2568 factory.go:153] Registering CRI-O factory
Apr 17 14:07:31.090068 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090016 2568 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 17 14:07:31.090068 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090023 2568 factory.go:223] Registration of the crio container factory successfully
Apr 17 14:07:31.090310 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090091 2568 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 17 14:07:31.090310 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090114 2568 factory.go:103] Registering Raw factory
Apr 17 14:07:31.090310 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090129 2568 manager.go:1196] Started watching for new ooms in manager
Apr 17 14:07:31.090310 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090208 2568 reconstruct.go:97] "Volume reconstruction finished"
Apr 17 14:07:31.090310 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090218 2568 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 14:07:31.091170 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.090882 2568 manager.go:319] Starting recovery of all containers
Apr 17 14:07:31.092527 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.092503 2568 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 14:07:31.094776 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.094751 2568 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-140-104.ec2.internal\" not found" node="ip-10-0-140-104.ec2.internal"
Apr 17 14:07:31.103108 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.103084 2568 manager.go:324] Recovery completed
Apr 17 14:07:31.107106 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.107092 2568 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 14:07:31.111109 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.111091 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 14:07:31.111192 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.111121 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 14:07:31.111192 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.111136 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientPID"
Apr 17 14:07:31.111691 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.111677 2568 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 17 14:07:31.111691 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.111689 2568 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 17 14:07:31.111774 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.111708 2568 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 14:07:31.114438 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.114423 2568 policy_none.go:49] "None policy: Start"
Apr 17 14:07:31.114438 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.114439 2568 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 14:07:31.114541 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.114448 2568 state_mem.go:35] "Initializing new in-memory state store"
Apr 17 14:07:31.152512 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.152494 2568 manager.go:341] "Starting Device Plugin manager"
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.152535 2568 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.152548 2568 server.go:85] "Starting device plugin registration server"
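The system:anonymous "forbidden" errors above are an expected artifact of TLS bootstrap ordering: watches started before the client certificate from csr-wpbnd is loaded fall back to anonymous requests and get 403s, and they should taper off once the issued certificate is picked up. A sketch to check that ordering against a journal capture on stdin (timestamp parsing assumes the journal prefix format seen in this log):

```python
# Correlate the bootstrap CSR issuance with the last anonymous 403.
import re
import sys

TS = re.compile(r"^(\w+ \d+ [\d:.]+)")  # e.g. "Apr 17 14:07:31.078977"

issued_at = None
last_forbidden = None
for line in sys.stdin:
    m = TS.match(line)
    if not m:
        continue
    if "Certificate signing request is issued" in line:
        issued_at = m.group(1)
    # Match both the plain and the klog-escaped form of the username.
    if 'User "system:anonymous"' in line or 'User \\"system:anonymous\\"' in line:
        last_forbidden = m.group(1)

print("CSR issued at:      ", issued_at)
print("last anonymous 403: ", last_forbidden)
```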
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.152791 2568 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.152801 2568 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.152899 2568 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.152997 2568 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.153006 2568 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.154402 2568 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 17 14:07:31.160489 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.154440 2568 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-140-104.ec2.internal\" not found"
Apr 17 14:07:31.253250 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.253157 2568 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 14:07:31.254664 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.254647 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 14:07:31.254784 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.254676 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 14:07:31.254784 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.254687 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientPID"
Apr 17 14:07:31.254784 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.254721 2568 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-140-104.ec2.internal"
Apr 17 14:07:31.257401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.257371 2568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 17 14:07:31.258555 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.258539 2568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 17 14:07:31.258650 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.258562 2568 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 17 14:07:31.258650 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.258579 2568 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 14:07:31.258650 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.258585 2568 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 17 14:07:31.258650 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.258614 2568 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 17 14:07:31.261544 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.261529 2568 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 14:07:31.265175 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.265157 2568 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-140-104.ec2.internal"
Apr 17 14:07:31.265269 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.265184 2568 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-140-104.ec2.internal\": node \"ip-10-0-140-104.ec2.internal\" not found"
Apr 17 14:07:31.336373 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.336345 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found"
Apr 17 14:07:31.359443 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.359406 2568 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal"]
Apr 17 14:07:31.359571 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.359487 2568 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 14:07:31.360429 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.360410 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 14:07:31.360507 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.360442 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 14:07:31.360507 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.360455 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientPID"
Apr 17 14:07:31.361959 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.361947 2568 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 14:07:31.362109 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.362094 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal"
Apr 17 14:07:31.362149 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.362124 2568 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 14:07:31.362610 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.362596 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 14:07:31.362679 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.362621 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 14:07:31.362679 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.362632 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientPID"
Apr 17 14:07:31.362679 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.362597 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 14:07:31.362786 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.362693 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 14:07:31.362786 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.362702 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientPID"
Apr 17 14:07:31.363918 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.363904 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal"
Apr 17 14:07:31.363969 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.363931 2568 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 14:07:31.364508 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.364483 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 14:07:31.364584 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.364512 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 14:07:31.364584 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.364528 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasSufficientPID"
Apr 17 14:07:31.389600 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.389575 2568 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-140-104.ec2.internal\" not found" node="ip-10-0-140-104.ec2.internal"
Apr 17 14:07:31.391510 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.391491 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9b0edd108d39af17896538950b3b13f-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal\" (UID: \"e9b0edd108d39af17896538950b3b13f\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal"
Apr 17 14:07:31.391612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.391517 2568 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9b0edd108d39af17896538950b3b13f-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal\" (UID: \"e9b0edd108d39af17896538950b3b13f\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.391612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.391535 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/d77c3cf6e24439d7792eeaf3b4807554-config\") pod \"kube-apiserver-proxy-ip-10-0-140-104.ec2.internal\" (UID: \"d77c3cf6e24439d7792eeaf3b4807554\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.393943 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.393928 2568 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-140-104.ec2.internal\" not found" node="ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.437071 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.437039 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found" Apr 17 14:07:31.492081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.492047 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9b0edd108d39af17896538950b3b13f-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal\" (UID: \"e9b0edd108d39af17896538950b3b13f\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.492081 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.492087 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/d77c3cf6e24439d7792eeaf3b4807554-config\") pod \"kube-apiserver-proxy-ip-10-0-140-104.ec2.internal\" (UID: \"d77c3cf6e24439d7792eeaf3b4807554\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.492287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.492106 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9b0edd108d39af17896538950b3b13f-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal\" (UID: \"e9b0edd108d39af17896538950b3b13f\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.492287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.492150 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9b0edd108d39af17896538950b3b13f-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal\" (UID: \"e9b0edd108d39af17896538950b3b13f\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.492287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.492155 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9b0edd108d39af17896538950b3b13f-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal\" (UID: \"e9b0edd108d39af17896538950b3b13f\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.492287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.492164 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/d77c3cf6e24439d7792eeaf3b4807554-config\") pod \"kube-apiserver-proxy-ip-10-0-140-104.ec2.internal\" (UID: \"d77c3cf6e24439d7792eeaf3b4807554\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.537369 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.537307 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found" Apr 17 14:07:31.638067 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.637999 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found" Apr 17 14:07:31.692349 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.692322 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.697222 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.697199 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" Apr 17 14:07:31.738403 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.738368 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found" Apr 17 14:07:31.838989 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.838908 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found" Apr 17 14:07:31.939365 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:31.939329 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found" Apr 17 14:07:31.997886 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.997840 2568 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 17 14:07:31.998435 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.998000 2568 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 17 14:07:31.998435 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:31.998020 2568 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 17 14:07:32.040157 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:32.040121 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found" Apr 17 14:07:32.079541 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.079508 2568 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-16 14:02:31 +0000 UTC" deadline="2027-09-20 02:03:16.677921651 +0000 UTC" Apr 17 14:07:32.079541 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:07:32.079536 2568 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12491h55m44.598388269s" Apr 17 14:07:32.089270 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.089222 2568 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 17 14:07:32.100922 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.100894 2568 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 17 14:07:32.140704 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:32.140666 2568 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-104.ec2.internal\" not found" Apr 17 14:07:32.186100 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.186064 2568 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-nt5sj" Apr 17 14:07:32.193154 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.193128 2568 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-nt5sj" Apr 17 14:07:32.234673 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:32.234637 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b0edd108d39af17896538950b3b13f.slice/crio-2a0c67f82cd2919ed1b4a49b11fb8831c3dd4a538f63f5cfcf3df528acf369b2 WatchSource:0}: Error finding container 2a0c67f82cd2919ed1b4a49b11fb8831c3dd4a538f63f5cfcf3df528acf369b2: Status 404 returned error can't find the container with id 2a0c67f82cd2919ed1b4a49b11fb8831c3dd4a538f63f5cfcf3df528acf369b2 Apr 17 14:07:32.237087 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.237067 2568 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 14:07:32.238295 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.238280 2568 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 14:07:32.261609 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.261566 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" event={"ID":"e9b0edd108d39af17896538950b3b13f","Type":"ContainerStarted","Data":"2a0c67f82cd2919ed1b4a49b11fb8831c3dd4a538f63f5cfcf3df528acf369b2"} Apr 17 14:07:32.289755 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.289727 2568 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" Apr 17 14:07:32.301711 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.301685 2568 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 14:07:32.303432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.303418 2568 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" Apr 17 14:07:32.309829 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.309814 2568 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 
14:07:32.502247 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:32.502163 2568 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 14:07:33.065893 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.065850 2568 apiserver.go:52] "Watching apiserver" Apr 17 14:07:33.073568 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.073545 2568 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 17 14:07:33.075942 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.075910 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hhjfc","openshift-network-operator/iptables-alerter-5qssl","kube-system/konnectivity-agent-5bbkp","kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal","openshift-cluster-node-tuning-operator/tuned-vzgdn","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal","openshift-multus/network-metrics-daemon-vgr2h","openshift-network-diagnostics/network-check-target-gh5vx","openshift-ovn-kubernetes/ovnkube-node-vmbqf","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk","openshift-dns/node-resolver-sh5zz","openshift-image-registry/node-ca-6nflw","openshift-multus/multus-additional-cni-plugins-x4sv2"] Apr 17 14:07:33.079425 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.079406 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.080594 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.080576 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.082182 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.081935 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:33.082182 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.081967 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 17 14:07:33.082182 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.081979 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 17 14:07:33.082182 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.082009 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-7ccvg\"" Apr 17 14:07:33.082182 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.082015 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.082182 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.082040 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 17 14:07:33.082810 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.082789 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 17 14:07:33.083000 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.082986 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 17 14:07:33.083205 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.083192 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-gj4w6\"" Apr 17 14:07:33.083301 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.083285 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 17 14:07:33.084948 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.084410 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 17 14:07:33.084948 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.084505 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 17 14:07:33.085100 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.085034 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-h5nks\"" Apr 17 14:07:33.086622 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.085393 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 17 14:07:33.086622 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.085476 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 17 14:07:33.086622 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.085903 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-vjnwm\"" Apr 17 14:07:33.088653 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.088637 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:33.088733 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.088707 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:33.090073 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.090053 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:33.090176 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.090121 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:33.091388 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.091367 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.091468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.091375 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.092669 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.092651 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.094150 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.094133 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 17 14:07:33.094250 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.094229 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.095013 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.094997 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-hm9xv\"" Apr 17 14:07:33.095105 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095000 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 17 14:07:33.095105 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095002 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 17 14:07:33.095236 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095208 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 17 14:07:33.095236 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095213 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 17 14:07:33.095342 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095233 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 17 14:07:33.095342 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095259 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 17 14:07:33.095342 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095213 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 17 14:07:33.095342 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095320 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-qqrsd\"" Apr 17 14:07:33.095519 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:07:33.095399 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 17 14:07:33.095519 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095416 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 17 14:07:33.095519 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095510 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-dss5c\"" Apr 17 14:07:33.095630 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095544 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 17 14:07:33.095630 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.095563 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 17 14:07:33.096064 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.096046 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.096423 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.096407 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 17 14:07:33.096617 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.096596 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 17 14:07:33.096617 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.096610 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 17 14:07:33.096756 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.096707 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-96h9n\"" Apr 17 14:07:33.097769 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097753 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-host\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.097845 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097783 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-node-log\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.097845 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097809 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-conf-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.097845 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097833 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-slash\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098016 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097870 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-run-netns\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098016 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097902 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-etc-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098016 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097926 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098016 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097950 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-hostroot\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.098016 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097974 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-etc-kubernetes\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.098016 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.097999 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-kubelet\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098021 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-systemd\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098055 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-cni-bin\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098225 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:07:33.098080 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/629add9d-ca73-450f-84ca-bbde403bb4a1-agent-certs\") pod \"konnectivity-agent-5bbkp\" (UID: \"629add9d-ca73-450f-84ca-bbde403bb4a1\") " pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098108 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysctl-d\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098131 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-kubelet\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098153 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-device-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098170 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzw5\" (UniqueName: \"kubernetes.io/projected/30cbf063-628a-472d-981e-312f5bea1f7f-kube-api-access-8rzw5\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098186 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-var-lib-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098202 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-socket-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.098225 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098225 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098239 2568 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-cni-netd\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098260 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-os-release\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098284 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/51daea5e-57b3-4362-8315-ad830e53345a-host-slash\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098311 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovnkube-script-lib\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098327 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklkp\" (UniqueName: \"kubernetes.io/projected/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-kube-api-access-dklkp\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098346 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-system-cni-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098360 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-cni-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098384 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcs64\" (UniqueName: \"kubernetes.io/projected/98c5f7fc-8ede-450b-961f-6812d4ee961b-kube-api-access-fcs64\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098398 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-etc-selinux\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: 
\"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098412 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-sys-fs\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098439 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/51daea5e-57b3-4362-8315-ad830e53345a-iptables-alerter-script\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098453 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-tmp\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098485 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sx9\" (UniqueName: \"kubernetes.io/projected/380cb33f-199e-44fd-8e74-06e5aad709a9-kube-api-access-45sx9\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098526 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-tuned\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098550 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:33.098569 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098567 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98c5f7fc-8ede-450b-961f-6812d4ee961b-cni-binary-copy\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098591 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-daemon-config\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 
14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098613 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-registration-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098631 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysctl-conf\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098644 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-sys\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098660 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098683 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098704 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovnkube-config\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098740 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-cni-bin\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098773 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-multus-certs\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098799 2568 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-kubelet-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098844 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-kubernetes\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098912 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-lib-modules\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098938 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-cnibin\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098956 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-netns\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098978 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-modprobe-d\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.098980 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 17 14:07:33.099110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099005 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cstcr\" (UniqueName: \"kubernetes.io/projected/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-kube-api-access-cstcr\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099026 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-env-overrides\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099048 2568 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovn-node-metrics-cert\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099069 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysconfig\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099089 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-run\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099108 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099112 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-var-lib-kubelet\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099138 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-systemd-units\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099163 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-systemd\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099186 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-log-socket\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099210 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-cni-multus\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 
14:07:33.099235 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lzzc\" (UniqueName: \"kubernetes.io/projected/51daea5e-57b3-4362-8315-ad830e53345a-kube-api-access-8lzzc\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099296 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/629add9d-ca73-450f-84ca-bbde403bb4a1-konnectivity-ca\") pod \"konnectivity-agent-5bbkp\" (UID: \"629add9d-ca73-450f-84ca-bbde403bb4a1\") " pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099317 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-wtrxn\"" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099318 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-ovn\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099388 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-socket-dir-parent\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.099832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.099412 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-k8s-cni-cncf-io\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.166143 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.166117 2568 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 14:07:33.190678 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.190653 2568 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 14:07:33.194423 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.194396 2568 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-16 14:02:32 +0000 UTC" deadline="2028-01-19 07:47:28.887151126 +0000 UTC" Apr 17 14:07:33.194423 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.194421 2568 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15401h39m55.692733385s" Apr 17 14:07:33.199982 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.199959 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-tuned\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " 
pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200087 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.199987 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:33.200087 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200004 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98c5f7fc-8ede-450b-961f-6812d4ee961b-cni-binary-copy\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200087 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200027 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-daemon-config\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200087 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200050 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-registration-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.200220 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200180 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.200270 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200230 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysctl-conf\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200270 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200257 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-sys\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200358 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200271 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-registration-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.200358 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200312 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-sys\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200358 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200315 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200370 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200357 2568 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200403 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovnkube-config\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200409 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200426 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysctl-conf\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200432 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-cni-bin\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200440 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200469 2568 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-multus-certs\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200480 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-cni-bin\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200495 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-kubelet-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200522 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-multus-certs\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200523 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-kubernetes\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200561 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-lib-modules\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200572 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-kubelet-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200562 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-kubernetes\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200590 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-cnibin\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200632 2568 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-cnibin\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200639 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-netns\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200665 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-modprobe-d\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200667 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-daemon-config\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200690 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cstcr\" (UniqueName: \"kubernetes.io/projected/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-kube-api-access-cstcr\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200672 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-lib-modules\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200725 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-env-overrides\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200727 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-netns\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200808 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-modprobe-d\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200838 2568 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovn-node-metrics-cert\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200891 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysconfig\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.200976 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200919 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-run\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200941 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-var-lib-kubelet\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200961 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysconfig\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200994 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-run\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200997 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovnkube-config\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.200966 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-systemd-units\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201010 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-var-lib-kubelet\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201005 2568 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-systemd-units\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201036 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-systemd\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201057 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-log-socket\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201072 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-cni-multus\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201080 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-env-overrides\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201092 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvm6z\" (UniqueName: \"kubernetes.io/projected/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-kube-api-access-lvm6z\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201124 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-log-socket\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201127 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-systemd\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201134 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-cni-multus\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 
14:07:33.201156 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8lzzc\" (UniqueName: \"kubernetes.io/projected/51daea5e-57b3-4362-8315-ad830e53345a-kube-api-access-8lzzc\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201180 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/629add9d-ca73-450f-84ca-bbde403bb4a1-konnectivity-ca\") pod \"konnectivity-agent-5bbkp\" (UID: \"629add9d-ca73-450f-84ca-bbde403bb4a1\") " pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:33.201743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201203 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-ovn\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201227 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-socket-dir-parent\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201254 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-k8s-cni-cncf-io\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201282 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0ca24926-e680-41ea-85df-e8d6ba856597-hosts-file\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201306 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-host\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201330 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-host\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201336 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98c5f7fc-8ede-450b-961f-6812d4ee961b-cni-binary-copy\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.202386 
ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201353 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-node-log\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201391 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-conf-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201391 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-run-k8s-cni-cncf-io\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201388 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-node-log\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201402 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-host\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201326 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-socket-dir-parent\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201439 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-ovn\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201478 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-conf-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201485 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.202386 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:07:33.201525 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-slash\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201541 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-run-netns\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.202386 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201574 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-slash\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201590 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-run-netns\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201560 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-etc-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201636 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201661 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-etc-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201690 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-run-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201711 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9629\" (UniqueName: \"kubernetes.io/projected/0ca24926-e680-41ea-85df-e8d6ba856597-kube-api-access-l9629\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " 
pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201712 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/629add9d-ca73-450f-84ca-bbde403bb4a1-konnectivity-ca\") pod \"konnectivity-agent-5bbkp\" (UID: \"629add9d-ca73-450f-84ca-bbde403bb4a1\") " pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201740 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-hostroot\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201763 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-etc-kubernetes\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201778 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-kubelet\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201797 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-systemd\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201797 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-hostroot\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201832 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-etc-kubernetes\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201823 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-cni-bin\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201852 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-systemd\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.203187 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:07:33.201852 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-host-var-lib-kubelet\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201899 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-serviceca\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.203187 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201922 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-cni-bin\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201924 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-cnibin\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201965 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/629add9d-ca73-450f-84ca-bbde403bb4a1-agent-certs\") pod \"konnectivity-agent-5bbkp\" (UID: \"629add9d-ca73-450f-84ca-bbde403bb4a1\") " pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.201991 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysctl-d\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202028 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-kubelet\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202054 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-device-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202079 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzw5\" (UniqueName: \"kubernetes.io/projected/30cbf063-628a-472d-981e-312f5bea1f7f-kube-api-access-8rzw5\") pod \"network-metrics-daemon-vgr2h\" (UID: 
\"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202089 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-kubelet\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202104 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-var-lib-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202143 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-socket-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202149 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-var-lib-openvswitch\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202171 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ca24926-e680-41ea-85df-e8d6ba856597-tmp-dir\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202171 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-sysctl-d\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202198 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-os-release\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202171 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-device-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202226 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202269 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-cni-netd\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202315 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-host-cni-netd\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202320 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-socket-dir\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202335 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-os-release\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202366 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-system-cni-dir\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.202371 2568 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202385 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-os-release\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202411 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6jb\" (UniqueName: \"kubernetes.io/projected/7075f961-4efd-41c8-9591-c7608ce4563a-kube-api-access-kk6jb\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.202462 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs 
podName:30cbf063-628a-472d-981e-312f5bea1f7f nodeName:}" failed. No retries permitted until 2026-04-17 14:07:33.702430455 +0000 UTC m=+3.040221550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs") pod "network-metrics-daemon-vgr2h" (UID: "30cbf063-628a-472d-981e-312f5bea1f7f") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202481 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/51daea5e-57b3-4362-8315-ad830e53345a-host-slash\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202536 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/51daea5e-57b3-4362-8315-ad830e53345a-host-slash\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202576 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovnkube-script-lib\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202598 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dklkp\" (UniqueName: \"kubernetes.io/projected/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-kube-api-access-dklkp\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202633 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-system-cni-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202657 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-cni-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202682 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fcs64\" (UniqueName: \"kubernetes.io/projected/98c5f7fc-8ede-450b-961f-6812d4ee961b-kube-api-access-fcs64\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202706 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-etc-selinux\") pod 
\"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202798 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-etc-selinux\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.204657 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202797 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-multus-cni-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202823 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202870 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-sys-fs\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202898 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/51daea5e-57b3-4362-8315-ad830e53345a-iptables-alerter-script\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202926 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-tmp\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202948 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45sx9\" (UniqueName: \"kubernetes.io/projected/380cb33f-199e-44fd-8e74-06e5aad709a9-kube-api-access-45sx9\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202976 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 
17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.203151 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovnkube-script-lib\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.202897 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98c5f7fc-8ede-450b-961f-6812d4ee961b-system-cni-dir\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.203261 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/380cb33f-199e-44fd-8e74-06e5aad709a9-sys-fs\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.203397 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/51daea5e-57b3-4362-8315-ad830e53345a-iptables-alerter-script\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.203764 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-etc-tuned\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.203813 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-ovn-node-metrics-cert\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.204768 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/629add9d-ca73-450f-84ca-bbde403bb4a1-agent-certs\") pod \"konnectivity-agent-5bbkp\" (UID: \"629add9d-ca73-450f-84ca-bbde403bb4a1\") " pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:33.205204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.204891 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-tmp\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.207551 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.207527 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 14:07:33.207551 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.207548 2568 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 14:07:33.207690 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.207558 2568 projected.go:194] Error preparing data for projected volume kube-api-access-zzpm4 for pod openshift-network-diagnostics/network-check-target-gh5vx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:33.207690 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.207626 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4 podName:6946cb58-b181-4207-94e3-02bca6a030b2 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:33.707607042 +0000 UTC m=+3.045398132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zzpm4" (UniqueName: "kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4") pod "network-check-target-gh5vx" (UID: "6946cb58-b181-4207-94e3-02bca6a030b2") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:33.209487 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.209468 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cstcr\" (UniqueName: \"kubernetes.io/projected/a7543f64-7649-44f7-bd0e-fdc6724b7f1e-kube-api-access-cstcr\") pod \"tuned-vzgdn\" (UID: \"a7543f64-7649-44f7-bd0e-fdc6724b7f1e\") " pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.210098 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.210079 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lzzc\" (UniqueName: \"kubernetes.io/projected/51daea5e-57b3-4362-8315-ad830e53345a-kube-api-access-8lzzc\") pod \"iptables-alerter-5qssl\" (UID: \"51daea5e-57b3-4362-8315-ad830e53345a\") " pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.210623 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.210607 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzw5\" (UniqueName: \"kubernetes.io/projected/30cbf063-628a-472d-981e-312f5bea1f7f-kube-api-access-8rzw5\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:33.215367 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.215347 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sx9\" (UniqueName: \"kubernetes.io/projected/380cb33f-199e-44fd-8e74-06e5aad709a9-kube-api-access-45sx9\") pod \"aws-ebs-csi-driver-node-wn4kk\" (UID: \"380cb33f-199e-44fd-8e74-06e5aad709a9\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.215816 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.215803 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcs64\" (UniqueName: \"kubernetes.io/projected/98c5f7fc-8ede-450b-961f-6812d4ee961b-kube-api-access-fcs64\") pod \"multus-hhjfc\" (UID: \"98c5f7fc-8ede-450b-961f-6812d4ee961b\") " pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.215881 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.215817 2568 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dklkp\" (UniqueName: \"kubernetes.io/projected/47f04f56-df85-4d1d-ade1-4e5bc3b49e67-kube-api-access-dklkp\") pod \"ovnkube-node-vmbqf\" (UID: \"47f04f56-df85-4d1d-ade1-4e5bc3b49e67\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.303683 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303640 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvm6z\" (UniqueName: \"kubernetes.io/projected/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-kube-api-access-lvm6z\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.303683 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303681 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0ca24926-e680-41ea-85df-e8d6ba856597-hosts-file\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303700 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-host\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303738 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303770 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l9629\" (UniqueName: \"kubernetes.io/projected/0ca24926-e680-41ea-85df-e8d6ba856597-kube-api-access-l9629\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303795 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-serviceca\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303823 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-cnibin\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303829 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0ca24926-e680-41ea-85df-e8d6ba856597-hosts-file\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303836 2568 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-host\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303851 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ca24926-e680-41ea-85df-e8d6ba856597-tmp-dir\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.303915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303893 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-os-release\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303928 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-system-cni-dir\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303895 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-cnibin\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304002 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-system-cni-dir\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.303945 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6jb\" (UniqueName: \"kubernetes.io/projected/7075f961-4efd-41c8-9591-c7608ce4563a-kube-api-access-kk6jb\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304049 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304078 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-os-release\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: 
\"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304080 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304129 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304252 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ca24926-e680-41ea-85df-e8d6ba856597-tmp-dir\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.304324 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304268 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7075f961-4efd-41c8-9591-c7608ce4563a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304654 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304371 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-serviceca\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.304654 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304521 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304654 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304535 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.304654 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.304544 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7075f961-4efd-41c8-9591-c7608ce4563a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.311300 ip-10-0-140-104 kubenswrapper[2568]: I0417 
14:07:33.311274 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9629\" (UniqueName: \"kubernetes.io/projected/0ca24926-e680-41ea-85df-e8d6ba856597-kube-api-access-l9629\") pod \"node-resolver-sh5zz\" (UID: \"0ca24926-e680-41ea-85df-e8d6ba856597\") " pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.311395 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.311277 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvm6z\" (UniqueName: \"kubernetes.io/projected/dd5f44ea-0850-4ff4-8f21-1f4135fc02ca-kube-api-access-lvm6z\") pod \"node-ca-6nflw\" (UID: \"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca\") " pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.311395 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.311312 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6jb\" (UniqueName: \"kubernetes.io/projected/7075f961-4efd-41c8-9591-c7608ce4563a-kube-api-access-kk6jb\") pod \"multus-additional-cni-plugins-x4sv2\" (UID: \"7075f961-4efd-41c8-9591-c7608ce4563a\") " pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.391420 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.391333 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" Apr 17 14:07:33.397089 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.397068 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5qssl" Apr 17 14:07:33.399646 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.399621 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod380cb33f_199e_44fd_8e74_06e5aad709a9.slice/crio-2ad2239ff99e0fbc34d2c18e4e07ebf61611fb482dee7a529096eff890c25f35 WatchSource:0}: Error finding container 2ad2239ff99e0fbc34d2c18e4e07ebf61611fb482dee7a529096eff890c25f35: Status 404 returned error can't find the container with id 2ad2239ff99e0fbc34d2c18e4e07ebf61611fb482dee7a529096eff890c25f35 Apr 17 14:07:33.403324 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.403297 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51daea5e_57b3_4362_8315_ad830e53345a.slice/crio-22a227af123cae1d506302836e43f6e8c3e6776bd407d6dad67a3cd2421ba923 WatchSource:0}: Error finding container 22a227af123cae1d506302836e43f6e8c3e6776bd407d6dad67a3cd2421ba923: Status 404 returned error can't find the container with id 22a227af123cae1d506302836e43f6e8c3e6776bd407d6dad67a3cd2421ba923 Apr 17 14:07:33.404668 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.404653 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:33.409497 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.409477 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" Apr 17 14:07:33.411013 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.410991 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod629add9d_ca73_450f_84ca_bbde403bb4a1.slice/crio-594910a9d60bc379a4ae870f23673a06fc8b77fab7c920c7a85a88d093adc318 WatchSource:0}: Error finding container 594910a9d60bc379a4ae870f23673a06fc8b77fab7c920c7a85a88d093adc318: Status 404 returned error can't find the container with id 594910a9d60bc379a4ae870f23673a06fc8b77fab7c920c7a85a88d093adc318 Apr 17 14:07:33.414161 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.414146 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:33.415882 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.415843 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7543f64_7649_44f7_bd0e_fdc6724b7f1e.slice/crio-be060ce7e96efd15c08e47b1d7a95ac425d0d20fb2a30974d3e81d45b8598013 WatchSource:0}: Error finding container be060ce7e96efd15c08e47b1d7a95ac425d0d20fb2a30974d3e81d45b8598013: Status 404 returned error can't find the container with id be060ce7e96efd15c08e47b1d7a95ac425d0d20fb2a30974d3e81d45b8598013 Apr 17 14:07:33.419710 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.419691 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hhjfc" Apr 17 14:07:33.423971 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.423949 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47f04f56_df85_4d1d_ade1_4e5bc3b49e67.slice/crio-ef6d88e8d6d5a4c6d88e6ca85cd1c518774edae7aacc8068be2d0e078d769b7a WatchSource:0}: Error finding container ef6d88e8d6d5a4c6d88e6ca85cd1c518774edae7aacc8068be2d0e078d769b7a: Status 404 returned error can't find the container with id ef6d88e8d6d5a4c6d88e6ca85cd1c518774edae7aacc8068be2d0e078d769b7a Apr 17 14:07:33.427867 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.427836 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98c5f7fc_8ede_450b_961f_6812d4ee961b.slice/crio-54f39be6b48d15020533d6779888f9ecae8142847386a54d1d93b77c29bc4091 WatchSource:0}: Error finding container 54f39be6b48d15020533d6779888f9ecae8142847386a54d1d93b77c29bc4091: Status 404 returned error can't find the container with id 54f39be6b48d15020533d6779888f9ecae8142847386a54d1d93b77c29bc4091 Apr 17 14:07:33.430390 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.430373 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sh5zz" Apr 17 14:07:33.435318 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.435298 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6nflw" Apr 17 14:07:33.436237 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.436218 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca24926_e680_41ea_85df_e8d6ba856597.slice/crio-82fe9ed44f52ae4e11829215675514b2806d6d6fb57e08e764676d06878d0e94 WatchSource:0}: Error finding container 82fe9ed44f52ae4e11829215675514b2806d6d6fb57e08e764676d06878d0e94: Status 404 returned error can't find the container with id 82fe9ed44f52ae4e11829215675514b2806d6d6fb57e08e764676d06878d0e94 Apr 17 14:07:33.439588 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.439570 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" Apr 17 14:07:33.441898 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.441876 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd5f44ea_0850_4ff4_8f21_1f4135fc02ca.slice/crio-3e6f1d2a3af64833ffd26a14b893ed9e8277d97f2bf6d2dc8d04409a2e3862b3 WatchSource:0}: Error finding container 3e6f1d2a3af64833ffd26a14b893ed9e8277d97f2bf6d2dc8d04409a2e3862b3: Status 404 returned error can't find the container with id 3e6f1d2a3af64833ffd26a14b893ed9e8277d97f2bf6d2dc8d04409a2e3862b3 Apr 17 14:07:33.446096 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:07:33.446077 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7075f961_4efd_41c8_9591_c7608ce4563a.slice/crio-f3353560dfce84bf6f8ddc0025a557d34c058abcf07c7388c184c082f07beafa WatchSource:0}: Error finding container f3353560dfce84bf6f8ddc0025a557d34c058abcf07c7388c184c082f07beafa: Status 404 returned error can't find the container with id f3353560dfce84bf6f8ddc0025a557d34c058abcf07c7388c184c082f07beafa Apr 17 14:07:33.463117 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.463096 2568 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 14:07:33.706646 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.706559 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:33.706784 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.706696 2568 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:33.706784 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.706754 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs podName:30cbf063-628a-472d-981e-312f5bea1f7f nodeName:}" failed. No retries permitted until 2026-04-17 14:07:34.706739453 +0000 UTC m=+4.044530541 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs") pod "network-metrics-daemon-vgr2h" (UID: "30cbf063-628a-472d-981e-312f5bea1f7f") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:33.807954 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:33.807916 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:33.808197 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.808109 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 14:07:33.808197 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.808135 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 14:07:33.808197 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.808148 2568 projected.go:194] Error preparing data for projected volume kube-api-access-zzpm4 for pod openshift-network-diagnostics/network-check-target-gh5vx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:33.808370 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:33.808215 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4 podName:6946cb58-b181-4207-94e3-02bca6a030b2 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:34.808195968 +0000 UTC m=+4.145987074 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zzpm4" (UniqueName: "kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4") pod "network-check-target-gh5vx" (UID: "6946cb58-b181-4207-94e3-02bca6a030b2") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:34.195001 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.194906 2568 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-16 14:02:32 +0000 UTC" deadline="2027-10-14 00:10:03.095362612 +0000 UTC" Apr 17 14:07:34.195001 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.194944 2568 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13066h2m28.900422308s" Apr 17 14:07:34.259030 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.258996 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:34.259196 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:34.259141 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:34.271987 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.271944 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"ef6d88e8d6d5a4c6d88e6ca85cd1c518774edae7aacc8068be2d0e078d769b7a"} Apr 17 14:07:34.277226 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.277180 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" event={"ID":"a7543f64-7649-44f7-bd0e-fdc6724b7f1e","Type":"ContainerStarted","Data":"be060ce7e96efd15c08e47b1d7a95ac425d0d20fb2a30974d3e81d45b8598013"} Apr 17 14:07:34.280503 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.280473 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" event={"ID":"380cb33f-199e-44fd-8e74-06e5aad709a9","Type":"ContainerStarted","Data":"2ad2239ff99e0fbc34d2c18e4e07ebf61611fb482dee7a529096eff890c25f35"} Apr 17 14:07:34.289246 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.289215 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" event={"ID":"7075f961-4efd-41c8-9591-c7608ce4563a","Type":"ContainerStarted","Data":"f3353560dfce84bf6f8ddc0025a557d34c058abcf07c7388c184c082f07beafa"} Apr 17 14:07:34.291903 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.291876 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6nflw" event={"ID":"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca","Type":"ContainerStarted","Data":"3e6f1d2a3af64833ffd26a14b893ed9e8277d97f2bf6d2dc8d04409a2e3862b3"} Apr 17 14:07:34.299503 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.299456 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sh5zz" event={"ID":"0ca24926-e680-41ea-85df-e8d6ba856597","Type":"ContainerStarted","Data":"82fe9ed44f52ae4e11829215675514b2806d6d6fb57e08e764676d06878d0e94"} Apr 17 14:07:34.301754 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.301723 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hhjfc" event={"ID":"98c5f7fc-8ede-450b-961f-6812d4ee961b","Type":"ContainerStarted","Data":"54f39be6b48d15020533d6779888f9ecae8142847386a54d1d93b77c29bc4091"} Apr 17 14:07:34.310669 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.310634 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-5bbkp" event={"ID":"629add9d-ca73-450f-84ca-bbde403bb4a1","Type":"ContainerStarted","Data":"594910a9d60bc379a4ae870f23673a06fc8b77fab7c920c7a85a88d093adc318"} Apr 17 14:07:34.322094 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.322061 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5qssl" event={"ID":"51daea5e-57b3-4362-8315-ad830e53345a","Type":"ContainerStarted","Data":"22a227af123cae1d506302836e43f6e8c3e6776bd407d6dad67a3cd2421ba923"} Apr 17 14:07:34.715369 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.715334 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " 
pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:34.715589 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:34.715568 2568 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:34.715661 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:34.715644 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs podName:30cbf063-628a-472d-981e-312f5bea1f7f nodeName:}" failed. No retries permitted until 2026-04-17 14:07:36.715625557 +0000 UTC m=+6.053416653 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs") pod "network-metrics-daemon-vgr2h" (UID: "30cbf063-628a-472d-981e-312f5bea1f7f") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:34.816579 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:34.816541 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:34.816803 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:34.816701 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 14:07:34.816803 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:34.816722 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 14:07:34.816803 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:34.816735 2568 projected.go:194] Error preparing data for projected volume kube-api-access-zzpm4 for pod openshift-network-diagnostics/network-check-target-gh5vx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:34.816803 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:34.816794 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4 podName:6946cb58-b181-4207-94e3-02bca6a030b2 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:36.816775881 +0000 UTC m=+6.154566984 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zzpm4" (UniqueName: "kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4") pod "network-check-target-gh5vx" (UID: "6946cb58-b181-4207-94e3-02bca6a030b2") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:35.259745 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:35.259712 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:35.260185 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:35.259840 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:36.258841 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:36.258810 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:36.259036 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:36.258958 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:36.730852 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:36.730772 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:36.731283 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:36.731014 2568 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:36.731283 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:36.731094 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs podName:30cbf063-628a-472d-981e-312f5bea1f7f nodeName:}" failed. No retries permitted until 2026-04-17 14:07:40.731072091 +0000 UTC m=+10.068863192 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs") pod "network-metrics-daemon-vgr2h" (UID: "30cbf063-628a-472d-981e-312f5bea1f7f") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:36.832257 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:36.832217 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:36.832450 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:36.832399 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 14:07:36.832450 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:36.832425 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 14:07:36.832450 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:36.832439 2568 projected.go:194] Error preparing data for projected volume kube-api-access-zzpm4 for pod openshift-network-diagnostics/network-check-target-gh5vx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:36.832679 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:36.832506 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4 podName:6946cb58-b181-4207-94e3-02bca6a030b2 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:40.832487251 +0000 UTC m=+10.170278354 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zzpm4" (UniqueName: "kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4") pod "network-check-target-gh5vx" (UID: "6946cb58-b181-4207-94e3-02bca6a030b2") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:37.259314 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.259280 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:37.259482 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:37.259413 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:37.351148 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.351112 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-cgd26"] Apr 17 14:07:37.356437 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.356342 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:37.356437 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:37.356425 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:37.437206 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.437017 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:37.437206 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.437080 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/a39d75e4-6ec9-4f74-b702-49461b73e668-dbus\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:37.437206 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.437116 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a39d75e4-6ec9-4f74-b702-49461b73e668-kubelet-config\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:37.538376 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.538294 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:37.538376 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.538349 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/a39d75e4-6ec9-4f74-b702-49461b73e668-dbus\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:37.538595 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.538382 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a39d75e4-6ec9-4f74-b702-49461b73e668-kubelet-config\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:37.538595 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.538459 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a39d75e4-6ec9-4f74-b702-49461b73e668-kubelet-config\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:37.538595 ip-10-0-140-104 kubenswrapper[2568]: 
E0417 14:07:37.538489 2568 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:37.538595 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:37.538560 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret podName:a39d75e4-6ec9-4f74-b702-49461b73e668 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:38.03854246 +0000 UTC m=+7.376333553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret") pod "global-pull-secret-syncer-cgd26" (UID: "a39d75e4-6ec9-4f74-b702-49461b73e668") : object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:37.538595 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:37.538565 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/a39d75e4-6ec9-4f74-b702-49461b73e668-dbus\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:38.042108 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:38.042014 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:38.042575 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:38.042154 2568 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:38.042575 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:38.042223 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret podName:a39d75e4-6ec9-4f74-b702-49461b73e668 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:39.042202315 +0000 UTC m=+8.379993409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret") pod "global-pull-secret-syncer-cgd26" (UID: "a39d75e4-6ec9-4f74-b702-49461b73e668") : object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:38.259704 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:38.259671 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:38.259922 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:38.259808 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:39.049647 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:39.049420 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:39.049647 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:39.049571 2568 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:39.049647 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:39.049638 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret podName:a39d75e4-6ec9-4f74-b702-49461b73e668 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:41.049618948 +0000 UTC m=+10.387410039 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret") pod "global-pull-secret-syncer-cgd26" (UID: "a39d75e4-6ec9-4f74-b702-49461b73e668") : object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:39.259699 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:39.259481 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:39.259699 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:39.259486 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:39.259699 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:39.259620 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:39.259699 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:39.259646 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:40.259555 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:40.259312 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:40.260049 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:40.259758 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:40.765250 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:40.765136 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:40.765424 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:40.765342 2568 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:40.765424 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:40.765411 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs podName:30cbf063-628a-472d-981e-312f5bea1f7f nodeName:}" failed. No retries permitted until 2026-04-17 14:07:48.765391795 +0000 UTC m=+18.103182890 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs") pod "network-metrics-daemon-vgr2h" (UID: "30cbf063-628a-472d-981e-312f5bea1f7f") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:40.866156 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:40.866116 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:40.866713 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:40.866396 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 14:07:40.866713 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:40.866427 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 14:07:40.866713 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:40.866442 2568 projected.go:194] Error preparing data for projected volume kube-api-access-zzpm4 for pod openshift-network-diagnostics/network-check-target-gh5vx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:40.866713 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:40.866511 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4 podName:6946cb58-b181-4207-94e3-02bca6a030b2 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:48.866493709 +0000 UTC m=+18.204284814 (durationBeforeRetry 8s). 
Apr 17 14:07:40.866713 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:40.866511 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4 podName:6946cb58-b181-4207-94e3-02bca6a030b2 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:48.866493709 +0000 UTC m=+18.204284814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-zzpm4" (UniqueName: "kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4") pod "network-check-target-gh5vx" (UID: "6946cb58-b181-4207-94e3-02bca6a030b2") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 17 14:07:41.068463 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.068426 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26"
Apr 17 14:07:41.068617 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:41.068567 2568 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 17 14:07:41.068676 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:41.068645 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret podName:a39d75e4-6ec9-4f74-b702-49461b73e668 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:45.068625488 +0000 UTC m=+14.406416582 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret") pod "global-pull-secret-syncer-cgd26" (UID: "a39d75e4-6ec9-4f74-b702-49461b73e668") : object "kube-system"/"original-pull-secret" not registered
Apr 17 14:07:41.260358 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.260331 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26"
Apr 17 14:07:41.260631 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:41.260452 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668"
Apr 17 14:07:41.260631 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.260504 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx"
Apr 17 14:07:41.260738 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:41.260631 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:41.338240 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.338052 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" event={"ID":"380cb33f-199e-44fd-8e74-06e5aad709a9","Type":"ContainerStarted","Data":"687531475ee722adc7db7920ee4c86f844cd96157b753688222b6739a35f2a34"} Apr 17 14:07:41.339721 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.339599 2568 generic.go:358] "Generic (PLEG): container finished" podID="7075f961-4efd-41c8-9591-c7608ce4563a" containerID="1b219ffc68bd9a9b10f750ee6b23fbcb4beeb601d7015076c589729de0a85555" exitCode=0 Apr 17 14:07:41.339721 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.339682 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" event={"ID":"7075f961-4efd-41c8-9591-c7608ce4563a","Type":"ContainerDied","Data":"1b219ffc68bd9a9b10f750ee6b23fbcb4beeb601d7015076c589729de0a85555"} Apr 17 14:07:41.342275 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.342245 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6nflw" event={"ID":"dd5f44ea-0850-4ff4-8f21-1f4135fc02ca","Type":"ContainerStarted","Data":"9495a0eebb10190bbe11dd753895f62f0e45b655d9ab2d6fa51d369002315c1e"} Apr 17 14:07:41.344432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.344201 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sh5zz" event={"ID":"0ca24926-e680-41ea-85df-e8d6ba856597","Type":"ContainerStarted","Data":"242728ea7e44536850b6e9d5119573119bf8b07b4246e395c5a66306e1e297ec"} Apr 17 14:07:41.346443 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.346123 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-5bbkp" event={"ID":"629add9d-ca73-450f-84ca-bbde403bb4a1","Type":"ContainerStarted","Data":"30ece6b0793d8dd82427bc559b6fa12bcd724175ba6fe95b6eebc3e65e1cd17b"} Apr 17 14:07:41.377309 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.376980 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-5bbkp" podStartSLOduration=3.677526626 podStartE2EDuration="10.376960736s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.412709752 +0000 UTC m=+2.750500841" lastFinishedPulling="2026-04-17 14:07:40.112143857 +0000 UTC m=+9.449934951" observedRunningTime="2026-04-17 14:07:41.376319299 +0000 UTC m=+10.714110412" watchObservedRunningTime="2026-04-17 14:07:41.376960736 +0000 UTC m=+10.714751848" Apr 17 14:07:41.406804 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.406743 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-sh5zz" podStartSLOduration=3.73323292 podStartE2EDuration="10.406729325s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.438258228 +0000 UTC m=+2.776049320" lastFinishedPulling="2026-04-17 14:07:40.111754622 +0000 UTC m=+9.449545725" observedRunningTime="2026-04-17 14:07:41.392118063 +0000 UTC m=+10.729909175" watchObservedRunningTime="2026-04-17 14:07:41.406729325 +0000 UTC m=+10.744520426" Apr 17 14:07:41.406990 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:41.406843 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6nflw" 
podStartSLOduration=3.7392321920000002 podStartE2EDuration="10.406836904s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.444170966 +0000 UTC m=+2.781962056" lastFinishedPulling="2026-04-17 14:07:40.111775665 +0000 UTC m=+9.449566768" observedRunningTime="2026-04-17 14:07:41.406059229 +0000 UTC m=+10.743850341" watchObservedRunningTime="2026-04-17 14:07:41.406836904 +0000 UTC m=+10.744628017"
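The pod_startup_latency_tracker entries above carry their own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), since the startup SLO metric excludes pull time. For konnectivity-agent-5bbkp: 10.377s - (40.112s - 33.413s) = 3.678s, which is the logged value. A worked check using the timestamps from that entry:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s) // Go accepts the fractional seconds on parse
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the konnectivity-agent-5bbkp entry above.
	created := mustParse("2026-04-17 14:07:31 +0000 UTC")
	firstPull := mustParse("2026-04-17 14:07:33.412709752 +0000 UTC")
	lastPull := mustParse("2026-04-17 14:07:40.112143857 +0000 UTC")
	running := mustParse("2026-04-17 14:07:41.376960736 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window

	fmt.Println(e2e) // 10.376960736s, matching the logged podStartE2EDuration
	fmt.Println(slo) // ~3.6775s; the logged 3.677526626 differs by a few
	// nanoseconds because the kubelet computes on its monotonic clock
	// readings (the m=+ offsets) rather than the wall timestamps
}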
pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" podUID="d77c3cf6e24439d7792eeaf3b4807554" Apr 17 14:07:42.259407 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:42.259378 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:42.259555 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:42.259511 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:42.349497 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:42.349461 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5qssl" event={"ID":"51daea5e-57b3-4362-8315-ad830e53345a","Type":"ContainerStarted","Data":"cf1f7447b77fd6a9eb09b7fbe4a8682eb6411f19b187192759559bb3432ebfbc"} Apr 17 14:07:42.349973 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:42.349699 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" Apr 17 14:07:42.349973 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:42.349945 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" Apr 17 14:07:42.364100 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:42.364042 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-5qssl" podStartSLOduration=4.651232658 podStartE2EDuration="11.364022879s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.4047358 +0000 UTC m=+2.742526888" lastFinishedPulling="2026-04-17 14:07:40.117526013 +0000 UTC m=+9.455317109" observedRunningTime="2026-04-17 14:07:42.363783785 +0000 UTC m=+11.701574896" watchObservedRunningTime="2026-04-17 14:07:42.364022879 +0000 UTC m=+11.701813990" Apr 17 14:07:42.401456 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:42.401404 2568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81: Requesting bearer token: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:42.401628 2568 kuberuntime_manager.go:1358] "Unhandled Error" err=< Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: init container &Container{Name:setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81,Command:[/bin/bash -ec],Args:[echo -n "Waiting for kubelet key and certificate to be available" Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: while [ -n "$(test -e /var/lib/kubelet/pki/kubelet-server-current.pem)" ] ; do Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: echo -n "." 
Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: sleep 1 Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: (( tries += 1 )) Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: if [[ "${tries}" -gt 10 ]]; then Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: echo "Timed out waiting for kubelet key and cert." Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: exit 1 Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: fi Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: done Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-lib-kubelet,ReadOnly:false,MountPath:/var,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal_openshift-machine-config-operator(e9b0edd108d39af17896538950b3b13f): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81: Requesting bearer token: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image Apr 17 14:07:42.401650 ip-10-0-140-104 kubenswrapper[2568]: > logger="UnhandledError" Apr 17 14:07:42.402806 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:42.402775 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81: Requesting bearer token: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" podUID="e9b0edd108d39af17896538950b3b13f" Apr 17 14:07:43.259866 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:43.259829 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:43.260062 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:43.259837 2568 util.go:30] "No sandbox for pod can be found. 
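The setup init container's wait loop, dumped in the error above, has a quirk worth flagging: test -e prints nothing, so "$(test -e ...)" is always the empty string, [ -n "" ] is always false, and the loop body (the dots, the sleep, and the tries counter) never executes; the container proceeds immediately whether or not the certificate exists. A corrected sketch of the apparent intent, testing the file directly and initializing the counter, would be:

  # Wait up to ~10s for the kubelet serving certificate, then give up.
  tries=0
  echo -n "Waiting for kubelet key and certificate to be available"
  while [ ! -e /var/lib/kubelet/pki/kubelet-server-current.pem ]; do
    echo -n "."
    sleep 1
    (( tries += 1 ))
    if [[ "${tries}" -gt 10 ]]; then
      echo "Timed out waiting for kubelet key and cert."
      exit 1
    fi
  done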
Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:43.260062 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:43.259963 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:43.260175 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:43.260106 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:43.352261 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:43.352231 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81: Requesting bearer token: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" podUID="e9b0edd108d39af17896538950b3b13f" Apr 17 14:07:43.791658 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:43.791611 2568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d: reading manifest sha256:96316433550661db3ef74c1200d3edc0ec9d0b87f2b41589aa7b5e841b6660e3 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d" Apr 17 14:07:43.791937 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:43.791848 2568 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:tuned,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d,Command:[/usr/bin/cluster-node-tuning-operator ocp-tuned --in-cluster 
-v=0],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OCP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RESYNC_PERIOD,Value:600,ValueFrom:nil,},EnvVar{Name:RELEASE_VERSION,Value:4.20.19,ValueFrom:nil,},EnvVar{Name:CLUSTER_NODE_TUNED_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-modprobe-d,ReadOnly:false,MountPath:/etc/modprobe.d,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-sysconfig,ReadOnly:false,MountPath:/etc/sysconfig,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-sysctl-d,ReadOnly:true,MountPath:/etc/sysctl.d,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-sysctl-conf,ReadOnly:true,MountPath:/etc/sysctl.conf,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-systemd,ReadOnly:false,MountPath:/etc/systemd,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-tuned,ReadOnly:false,MountPath:/etc/tuned,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sys,ReadOnly:false,MountPath:/sys,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-kubelet,ReadOnly:true,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cstcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod tuned-vzgdn_openshift-cluster-node-tuning-operator(a7543f64-7649-44f7-bd0e-fdc6724b7f1e): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d: reading manifest sha256:96316433550661db3ef74c1200d3edc0ec9d0b87f2b41589aa7b5e841b6660e3 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image" logger="UnhandledError" Apr 17 14:07:43.793079 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:43.793031 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tuned\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d: reading manifest sha256:96316433550661db3ef74c1200d3edc0ec9d0b87f2b41589aa7b5e841b6660e3 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image\"" pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" podUID="a7543f64-7649-44f7-bd0e-fdc6724b7f1e" Apr 17 14:07:44.038349 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:44.038303 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:44.039021 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:44.038998 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:44.258834 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:44.258796 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:44.258983 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:44.258928 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
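The recurring "network is not ready ... no CNI configuration file in /etc/kubernetes/cni/net.d/" errors mean the kubelet refuses to sync pod-network pods until the network plugin (multus and OVN-Kubernetes here, both visible coming up in these entries) writes its CNI configuration; host-network pods are unaffected. A quick confirmation, assuming shell access on the node and a working oc session:

  # Does the CNI config exist yet? It appears once multus/ovnkube-node are up.
  ls -l /etc/kubernetes/cni/net.d/

  # Are the network DaemonSet pods on this node making progress?
  oc -n openshift-multus get pods -o wide | grep ip-10-0-140-104
  oc -n openshift-ovn-kubernetes get pods -o wide | grep ip-10-0-140-104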
pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:44.353275 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:44.353160 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:44.353715 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:44.353686 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-5bbkp" Apr 17 14:07:44.354531 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:44.354485 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tuned\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d: reading manifest sha256:96316433550661db3ef74c1200d3edc0ec9d0b87f2b41589aa7b5e841b6660e3 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image\"" pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" podUID="a7543f64-7649-44f7-bd0e-fdc6724b7f1e" Apr 17 14:07:45.100248 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:45.100039 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:45.100406 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:45.100182 2568 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:45.100406 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:45.100355 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret podName:a39d75e4-6ec9-4f74-b702-49461b73e668 nodeName:}" failed. No retries permitted until 2026-04-17 14:07:53.10034022 +0000 UTC m=+22.438131311 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret") pod "global-pull-secret-syncer-cgd26" (UID: "a39d75e4-6ec9-4f74-b702-49461b73e668") : object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:45.259530 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:45.259495 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:45.259687 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:45.259496 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:45.259687 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:45.259628 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:45.259810 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:45.259708 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:46.259294 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:46.259236 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:46.259756 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:46.259378 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:47.259479 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:47.259439 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:47.259939 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:47.259903 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:47.262125 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:47.260099 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:47.262125 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:47.260708 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:48.259639 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:48.259605 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:48.260155 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:48.259719 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:48.829698 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:48.829650 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:48.829896 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:48.829807 2568 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:48.829955 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:48.829898 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs podName:30cbf063-628a-472d-981e-312f5bea1f7f nodeName:}" failed. No retries permitted until 2026-04-17 14:08:04.829877746 +0000 UTC m=+34.167668838 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs") pod "network-metrics-daemon-vgr2h" (UID: "30cbf063-628a-472d-981e-312f5bea1f7f") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:07:48.930494 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:48.930451 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:48.930664 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:48.930616 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 14:07:48.930664 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:48.930636 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 14:07:48.930664 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:48.930650 2568 projected.go:194] Error preparing data for projected volume kube-api-access-zzpm4 for pod openshift-network-diagnostics/network-check-target-gh5vx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:07:48.930826 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:48.930709 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4 podName:6946cb58-b181-4207-94e3-02bca6a030b2 nodeName:}" failed. 
Apr 17 14:07:49.259378 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:49.259341 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26"
Apr 17 14:07:49.259547 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:49.259452 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668"
Apr 17 14:07:49.259547 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:49.259527 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx"
Apr 17 14:07:49.259632 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:49.259606 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2"
Apr 17 14:07:50.258919 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:50.258890 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h"
Apr 17 14:07:50.259452 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:50.259037 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f"
Apr 17 14:07:50.363799 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:50.363719 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" event={"ID":"d77c3cf6e24439d7792eeaf3b4807554","Type":"ContainerStarted","Data":"7b91f73bac9ae4ea7dc6a88c1e5dbff00f317bc247510bd217db38425411d559"}
Apr 17 14:07:51.260043 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:51.260005 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26"
Apr 17 14:07:51.260489 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:51.260130 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668"
Apr 17 14:07:51.260489 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:51.260193 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx"
Apr 17 14:07:51.260489 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:51.260312 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2"
Apr 17 14:07:51.882214 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:51.882005 2568 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Apr 17 14:07:52.164639 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.164523 2568 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-17T14:07:51.88219614Z","UUID":"3a2283d5-375a-45bd-a952-194d3a84e22a","Handler":null,"Name":"","Endpoint":""}
Apr 17 14:07:52.167727 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.167702 2568 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0
Apr 17 14:07:52.167727 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.167733 2568 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
Apr 17 14:07:52.259587 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.259509 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h"
Apr 17 14:07:52.259710 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:52.259625 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f"
pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:52.369253 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.369207 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" event={"ID":"380cb33f-199e-44fd-8e74-06e5aad709a9","Type":"ContainerStarted","Data":"659ecd98719583c8625bc07a8d48a2beb620235bee9f7e3f1efb3e11507d7f9f"} Apr 17 14:07:52.371239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.371210 2568 generic.go:358] "Generic (PLEG): container finished" podID="7075f961-4efd-41c8-9591-c7608ce4563a" containerID="be4a0e4c3027900f84dba7c87e63bcd0a80f2d99e89d8a7fd040a62efad53632" exitCode=0 Apr 17 14:07:52.371377 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.371290 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" event={"ID":"7075f961-4efd-41c8-9591-c7608ce4563a","Type":"ContainerDied","Data":"be4a0e4c3027900f84dba7c87e63bcd0a80f2d99e89d8a7fd040a62efad53632"} Apr 17 14:07:52.373747 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.373719 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hhjfc" event={"ID":"98c5f7fc-8ede-450b-961f-6812d4ee961b","Type":"ContainerStarted","Data":"2807f1a75e9e526b9ccf5a592ed7eea1c3de1dfc27d7eeb2523845eb8e4f54b8"} Apr 17 14:07:52.376010 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.375956 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"cb5cadcd2d20a00705a6ba18f6fc54d16ac9248adb04e025db3c7a0d0584e1c6"} Apr 17 14:07:52.376010 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:52.375991 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"1e64bf20651639dfe9d62643a92a401f9bb99eaaf55e52b05cd2ffaf0f893f67"} Apr 17 14:07:53.164086 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:53.164051 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:53.164379 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:53.164194 2568 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:53.164379 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:53.164267 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret podName:a39d75e4-6ec9-4f74-b702-49461b73e668 nodeName:}" failed. No retries permitted until 2026-04-17 14:08:09.164247676 +0000 UTC m=+38.502038785 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret") pod "global-pull-secret-syncer-cgd26" (UID: "a39d75e4-6ec9-4f74-b702-49461b73e668") : object "kube-system"/"original-pull-secret" not registered Apr 17 14:07:53.259261 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:53.259228 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:53.259444 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:53.259228 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:53.259444 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:53.259342 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:53.259444 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:53.259395 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:54.259183 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.259000 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:54.259465 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:54.259263 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:54.380836 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.380751 2568 generic.go:358] "Generic (PLEG): container finished" podID="7075f961-4efd-41c8-9591-c7608ce4563a" containerID="651fac1dda0a30ae936efe1acdbea908e6b6ebdb903188956a79e693f343444b" exitCode=0 Apr 17 14:07:54.381006 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.380842 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" event={"ID":"7075f961-4efd-41c8-9591-c7608ce4563a","Type":"ContainerDied","Data":"651fac1dda0a30ae936efe1acdbea908e6b6ebdb903188956a79e693f343444b"} Apr 17 14:07:54.383504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.383488 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"f2bbc67a2f9969c06436782f6d2ac37b7590dfe50722c5bfc8499f9d13de4d5a"} Apr 17 14:07:54.383581 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.383516 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"825ae4b3f79e912b73eab13a91bcc412d6abe5a5657f3618b46bd3c0757f736a"} Apr 17 14:07:54.383581 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.383534 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"aa1791055ba5535140cbe5fb843162c31a294b76d936e59235d63322fd41c223"} Apr 17 14:07:54.383581 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.383544 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"a506c7df5e061bd8e70872f097524fee263155d3f945631f03c0deeffdcfa667"} Apr 17 14:07:54.385166 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.385141 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" event={"ID":"380cb33f-199e-44fd-8e74-06e5aad709a9","Type":"ContainerStarted","Data":"598a0e7ff8698612b240d3e31f7eae39e094f47573488c6387629a48c1c1d207"} Apr 17 14:07:54.386360 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.386339 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" event={"ID":"d77c3cf6e24439d7792eeaf3b4807554","Type":"ContainerStarted","Data":"cef71caa57c604bf78baf35a117f2c383aaaad345e51efa7d57b9c00d945b043"} Apr 17 14:07:54.403349 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.403308 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hhjfc" podStartSLOduration=6.623897764 podStartE2EDuration="23.403294842s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.429096185 +0000 UTC m=+2.766887274" lastFinishedPulling="2026-04-17 14:07:50.208493249 +0000 UTC m=+19.546284352" observedRunningTime="2026-04-17 14:07:52.407196761 +0000 UTC m=+21.744987942" watchObservedRunningTime="2026-04-17 14:07:54.403294842 +0000 UTC m=+23.741085953" Apr 17 14:07:54.430759 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.430709 2568 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-wn4kk" podStartSLOduration=2.876757448 podStartE2EDuration="23.430694259s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.401516075 +0000 UTC m=+2.739307164" lastFinishedPulling="2026-04-17 14:07:53.955452887 +0000 UTC m=+23.293243975" observedRunningTime="2026-04-17 14:07:54.430609545 +0000 UTC m=+23.768400656" watchObservedRunningTime="2026-04-17 14:07:54.430694259 +0000 UTC m=+23.768485389" Apr 17 14:07:54.431071 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:54.431050 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-104.ec2.internal" podStartSLOduration=22.431044264 podStartE2EDuration="22.431044264s" podCreationTimestamp="2026-04-17 14:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 14:07:54.415130246 +0000 UTC m=+23.752921356" watchObservedRunningTime="2026-04-17 14:07:54.431044264 +0000 UTC m=+23.768835375" Apr 17 14:07:55.259300 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:55.259257 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:55.259759 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:55.259257 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:55.259759 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:55.259390 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:55.259759 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:55.259408 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:55.389481 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:55.389401 2568 generic.go:358] "Generic (PLEG): container finished" podID="7075f961-4efd-41c8-9591-c7608ce4563a" containerID="4662179a9fb16c26d091cf3809c081bf59b5c6fe7783915755dd4342540ce520" exitCode=0 Apr 17 14:07:55.389618 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:55.389475 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" event={"ID":"7075f961-4efd-41c8-9591-c7608ce4563a","Type":"ContainerDied","Data":"4662179a9fb16c26d091cf3809c081bf59b5c6fe7783915755dd4342540ce520"} Apr 17 14:07:56.259541 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:56.259503 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:56.260006 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:56.259703 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:56.393297 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:56.393066 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" event={"ID":"e9b0edd108d39af17896538950b3b13f","Type":"ContainerStarted","Data":"3a8d974ace2ae33dab38054beb29992234255eae91ec3d2ca44644e682ee68aa"} Apr 17 14:07:56.398286 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:56.398235 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"33f16389e6a96b7f0f2e3f8b043d6b4dabb8706a48a5c68d100d7311978b56f8"} Apr 17 14:07:57.259338 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:57.259300 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:57.259338 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:57.259300 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:57.259562 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:57.259532 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:57.260038 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:57.259643 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:57.402128 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:57.402096 2568 generic.go:358] "Generic (PLEG): container finished" podID="e9b0edd108d39af17896538950b3b13f" containerID="3a8d974ace2ae33dab38054beb29992234255eae91ec3d2ca44644e682ee68aa" exitCode=0 Apr 17 14:07:57.402318 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:57.402165 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" event={"ID":"e9b0edd108d39af17896538950b3b13f","Type":"ContainerDied","Data":"3a8d974ace2ae33dab38054beb29992234255eae91ec3d2ca44644e682ee68aa"} Apr 17 14:07:58.259295 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:58.259261 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:07:58.259467 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:58.259374 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:07:58.405332 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:58.405298 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" event={"ID":"e9b0edd108d39af17896538950b3b13f","Type":"ContainerStarted","Data":"c2e84974c7eda7a91f94b31f5e42e9ab0b85455adf9f7be2f64d866be0f8d1ea"} Apr 17 14:07:58.419645 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:58.419594 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-104.ec2.internal" podStartSLOduration=26.419577762 podStartE2EDuration="26.419577762s" podCreationTimestamp="2026-04-17 14:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 14:07:58.419063156 +0000 UTC m=+27.756854267" watchObservedRunningTime="2026-04-17 14:07:58.419577762 +0000 UTC m=+27.757368873" Apr 17 14:07:59.259343 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.259153 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:07:59.259538 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.259153 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:07:59.259538 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:59.259439 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:07:59.259538 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:07:59.259518 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:07:59.413474 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.412817 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" event={"ID":"47f04f56-df85-4d1d-ade1-4e5bc3b49e67","Type":"ContainerStarted","Data":"d664fcd8f5dd8f32a50c18ebf1e89cb2d826e8cc43dc44468b893f0745d965be"} Apr 17 14:07:59.413474 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.413214 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:59.413474 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.413339 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:59.413474 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.413436 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:59.432963 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.432920 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:59.434025 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.433976 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" Apr 17 14:07:59.440036 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:07:59.439980 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf" podStartSLOduration=8.040403815 podStartE2EDuration="28.439959584s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.425608014 +0000 UTC m=+2.763399103" lastFinishedPulling="2026-04-17 14:07:53.825163768 +0000 UTC m=+23.162954872" observedRunningTime="2026-04-17 14:07:59.439300333 +0000 UTC m=+28.777091446" watchObservedRunningTime="2026-04-17 14:07:59.439959584 +0000 UTC m=+28.777750697" Apr 17 14:08:00.259478 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:00.259252 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:08:00.259478 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:00.259404 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:08:00.478205 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:00.478167 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-cgd26"] Apr 17 14:08:00.478711 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:00.478325 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:08:00.478711 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:00.478439 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:08:00.479108 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:00.479072 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vgr2h"] Apr 17 14:08:00.479287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:00.479187 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:08:00.479366 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:00.479298 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:08:00.480482 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:00.480461 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-gh5vx"] Apr 17 14:08:00.480588 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:00.480573 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:08:00.480691 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:00.480658 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:08:02.259172 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:02.259138 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:08:02.259621 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:02.259138 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:08:02.259621 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:02.259249 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:08:02.259621 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:02.259352 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:08:02.259621 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:02.259152 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:08:02.259621 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:02.259449 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:08:04.259723 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.259506 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:08:04.260224 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.259504 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:08:04.260224 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.259759 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:08:04.260224 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.259820 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:08:04.260224 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.259504 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:08:04.260224 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.259914 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:08:04.425427 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.425393 2568 generic.go:358] "Generic (PLEG): container finished" podID="7075f961-4efd-41c8-9591-c7608ce4563a" containerID="515291a0f570f6340192daab1b243d5a902bb81a1aae8f075ca31583a602435c" exitCode=0 Apr 17 14:08:04.425587 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.425476 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" event={"ID":"7075f961-4efd-41c8-9591-c7608ce4563a","Type":"ContainerDied","Data":"515291a0f570f6340192daab1b243d5a902bb81a1aae8f075ca31583a602435c"} Apr 17 14:08:04.426830 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.426809 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" event={"ID":"a7543f64-7649-44f7-bd0e-fdc6724b7f1e","Type":"ContainerStarted","Data":"0835c13a985abdac56e749d606c3e73bd490ff334470e7f343b7dbd90fe6f54a"} Apr 17 14:08:04.460180 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.460102 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-vzgdn" podStartSLOduration=3.212284454 podStartE2EDuration="33.460084756s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.418319367 +0000 UTC m=+2.756110456" lastFinishedPulling="2026-04-17 14:08:03.666119664 +0000 UTC m=+33.003910758" observedRunningTime="2026-04-17 14:08:04.459787147 +0000 UTC m=+33.797578258" watchObservedRunningTime="2026-04-17 14:08:04.460084756 +0000 UTC m=+33.797875869" Apr 17 14:08:04.849759 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.849679 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:08:04.849913 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.849794 2568 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:08:04.849913 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.849847 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs podName:30cbf063-628a-472d-981e-312f5bea1f7f nodeName:}" failed. No retries permitted until 2026-04-17 14:08:36.849832055 +0000 UTC m=+66.187623147 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs") pod "network-metrics-daemon-vgr2h" (UID: "30cbf063-628a-472d-981e-312f5bea1f7f") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 14:08:04.950872 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:04.950827 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:08:04.951071 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.950995 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 14:08:04.951071 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.951021 2568 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 14:08:04.951071 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.951032 2568 projected.go:194] Error preparing data for projected volume kube-api-access-zzpm4 for pod openshift-network-diagnostics/network-check-target-gh5vx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:08:04.951231 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:04.951095 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4 podName:6946cb58-b181-4207-94e3-02bca6a030b2 nodeName:}" failed. No retries permitted until 2026-04-17 14:08:36.951075027 +0000 UTC m=+66.288866132 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-zzpm4" (UniqueName: "kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4") pod "network-check-target-gh5vx" (UID: "6946cb58-b181-4207-94e3-02bca6a030b2") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 14:08:05.432303 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:05.432266 2568 generic.go:358] "Generic (PLEG): container finished" podID="7075f961-4efd-41c8-9591-c7608ce4563a" containerID="04b4edf064ecf9e7f9c1a20176802a50a918cfd2464324c91bf28578e4ee60d2" exitCode=0 Apr 17 14:08:05.432880 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:05.432331 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" event={"ID":"7075f961-4efd-41c8-9591-c7608ce4563a","Type":"ContainerDied","Data":"04b4edf064ecf9e7f9c1a20176802a50a918cfd2464324c91bf28578e4ee60d2"} Apr 17 14:08:06.259746 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.259709 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:08:06.259926 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.259777 2568 util.go:30] "No sandbox for pod can be found. 
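The two figures in the "Observed pod startup duration" entries are related: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which matches the upstream pod-startup SLI definition excluding image pull time. A minimal Go sketch of the arithmetic using the tuned-vzgdn values above; the subtraction rule is inferred from the numbers, and the last-digit difference versus the logged value comes from the kubelet using monotonic-clock offsets (the m=+ values) rather than wall time:

package main

import (
	"fmt"
	"time"
)

// layout is Go's default time.Time string format, which the log entries use.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tuned-vzgdn "Observed pod startup duration" entry.
	created := mustParse("2026-04-17 14:07:31 +0000 UTC")
	firstPull := mustParse("2026-04-17 14:07:33.418319367 +0000 UTC")
	lastPull := mustParse("2026-04-17 14:08:03.666119664 +0000 UTC")
	running := mustParse("2026-04-17 14:08:04.460084756 +0000 UTC")

	e2e := running.Sub(created)          // 33.460084756s, matching podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~3.212284459s, ~= podStartSLOduration=3.212284454
	fmt.Println(e2e, slo)
}

The same arithmetic reproduces the global-pull-secret-syncer-cgd26 figures further down: 37.473024981s end-to-end minus 4.387409494s of pulling (monotonic) gives exactly 33.085615487.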
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:08:06.259926 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.259798 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:08:06.259996 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:06.259932 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vgr2h" podUID="30cbf063-628a-472d-981e-312f5bea1f7f" Apr 17 14:08:06.259996 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:06.259953 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-cgd26" podUID="a39d75e4-6ec9-4f74-b702-49461b73e668" Apr 17 14:08:06.260064 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:06.260041 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gh5vx" podUID="6946cb58-b181-4207-94e3-02bca6a030b2" Apr 17 14:08:06.437558 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.437527 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" event={"ID":"7075f961-4efd-41c8-9591-c7608ce4563a","Type":"ContainerStarted","Data":"15677f30b6f108e584b0764182a2cef5200e996db65481b6ea19e84a5f74cf5c"} Apr 17 14:08:06.459332 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.459274 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-x4sv2" podStartSLOduration=5.214515368 podStartE2EDuration="35.459247262s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:07:33.44756254 +0000 UTC m=+2.785353628" lastFinishedPulling="2026-04-17 14:08:03.692294415 +0000 UTC m=+33.030085522" observedRunningTime="2026-04-17 14:08:06.458036268 +0000 UTC m=+35.795827379" watchObservedRunningTime="2026-04-17 14:08:06.459247262 +0000 UTC m=+35.797038372" Apr 17 14:08:06.464106 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.464084 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeReady" Apr 17 14:08:06.464250 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.464209 2568 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 17 14:08:06.503741 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.503710 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mrqpz"] Apr 17 14:08:06.518832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.518804 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-n4248"] Apr 17 14:08:06.518997 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.518974 
2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.521472 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.521444 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 17 14:08:06.521711 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.521693 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 17 14:08:06.521767 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.521724 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-fnmsv\"" Apr 17 14:08:06.533073 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.533040 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mrqpz"] Apr 17 14:08:06.533073 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.533075 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n4248"] Apr 17 14:08:06.533221 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.533152 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:06.535975 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.535954 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 17 14:08:06.536119 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.536000 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 17 14:08:06.536315 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.536297 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 17 14:08:06.536649 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.536631 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-nsnr8\"" Apr 17 14:08:06.661987 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.661957 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6cr2\" (UniqueName: \"kubernetes.io/projected/c99bcc58-e14e-4455-8308-f2a36ad35eff-kube-api-access-f6cr2\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.662172 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.661994 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99bcc58-e14e-4455-8308-f2a36ad35eff-config-volume\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.662172 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.662018 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:06.662172 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.662106 2568 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj8bg\" (UniqueName: \"kubernetes.io/projected/f7bc9c86-d3a7-43d7-9862-6170cb691894-kube-api-access-cj8bg\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:06.662172 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.662161 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c99bcc58-e14e-4455-8308-f2a36ad35eff-tmp-dir\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.662300 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.662190 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.763029 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.762963 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6cr2\" (UniqueName: \"kubernetes.io/projected/c99bcc58-e14e-4455-8308-f2a36ad35eff-kube-api-access-f6cr2\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.763029 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.762999 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99bcc58-e14e-4455-8308-f2a36ad35eff-config-volume\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.763029 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.763019 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:06.763230 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.763052 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cj8bg\" (UniqueName: \"kubernetes.io/projected/f7bc9c86-d3a7-43d7-9862-6170cb691894-kube-api-access-cj8bg\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:06.763230 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.763077 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c99bcc58-e14e-4455-8308-f2a36ad35eff-tmp-dir\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.763230 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.763103 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:06.763230 ip-10-0-140-104 kubenswrapper[2568]: E0417 
Apr 17 14:08:06.763230 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:06.763180 2568 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 17 14:08:06.763411 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:06.763241 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert podName:f7bc9c86-d3a7-43d7-9862-6170cb691894 nodeName:}" failed. No retries permitted until 2026-04-17 14:08:07.263219135 +0000 UTC m=+36.601010230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert") pod "ingress-canary-n4248" (UID: "f7bc9c86-d3a7-43d7-9862-6170cb691894") : secret "canary-serving-cert" not found
Apr 17 14:08:06.763411 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:06.763259 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls podName:c99bcc58-e14e-4455-8308-f2a36ad35eff nodeName:}" failed. No retries permitted until 2026-04-17 14:08:07.26325044 +0000 UTC m=+36.601041531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls") pod "dns-default-mrqpz" (UID: "c99bcc58-e14e-4455-8308-f2a36ad35eff") : secret "dns-default-metrics-tls" not found
Apr 17 14:08:06.763493 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.763428 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c99bcc58-e14e-4455-8308-f2a36ad35eff-tmp-dir\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz"
Apr 17 14:08:06.763640 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.763620 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99bcc58-e14e-4455-8308-f2a36ad35eff-config-volume\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz"
Apr 17 14:08:06.773124 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.773098 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6cr2\" (UniqueName: \"kubernetes.io/projected/c99bcc58-e14e-4455-8308-f2a36ad35eff-kube-api-access-f6cr2\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz"
Apr 17 14:08:06.773240 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:06.773173 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj8bg\" (UniqueName: \"kubernetes.io/projected/f7bc9c86-d3a7-43d7-9862-6170cb691894-kube-api-access-cj8bg\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248"
Apr 17 14:08:07.265764 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:07.265731 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248"
Apr 17 14:08:07.265950 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:07.265822 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz"
Apr 17 14:08:07.265950 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:07.265883 2568 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 17 14:08:07.265950 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:07.265940 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert podName:f7bc9c86-d3a7-43d7-9862-6170cb691894 nodeName:}" failed. No retries permitted until 2026-04-17 14:08:08.265924443 +0000 UTC m=+37.603715537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert") pod "ingress-canary-n4248" (UID: "f7bc9c86-d3a7-43d7-9862-6170cb691894") : secret "canary-serving-cert" not found
Apr 17 14:08:07.266076 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:07.265950 2568 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 17 14:08:07.266076 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:07.266002 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls podName:c99bcc58-e14e-4455-8308-f2a36ad35eff nodeName:}" failed. No retries permitted until 2026-04-17 14:08:08.265989641 +0000 UTC m=+37.603780730 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls") pod "dns-default-mrqpz" (UID: "c99bcc58-e14e-4455-8308-f2a36ad35eff") : secret "dns-default-metrics-tls" not found
Apr 17 14:08:08.259491 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.259454 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26"
Apr 17 14:08:08.259956 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.259583 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx"
Apr 17 14:08:08.259956 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.259814 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:08:08.262564 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.262547 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 17 14:08:08.262675 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.262608 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 17 14:08:08.262675 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.262634 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 17 14:08:08.264362 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.264120 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 17 14:08:08.264362 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.264197 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-x9vkl\"" Apr 17 14:08:08.264517 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.264494 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-bfspf\"" Apr 17 14:08:08.271809 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.271783 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:08.271944 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:08.271843 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:08.272017 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:08.271944 2568 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 14:08:08.272017 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:08.272014 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls podName:c99bcc58-e14e-4455-8308-f2a36ad35eff nodeName:}" failed. No retries permitted until 2026-04-17 14:08:10.271994477 +0000 UTC m=+39.609785584 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls") pod "dns-default-mrqpz" (UID: "c99bcc58-e14e-4455-8308-f2a36ad35eff") : secret "dns-default-metrics-tls" not found Apr 17 14:08:08.272120 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:08.271951 2568 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 14:08:08.272120 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:08.272084 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert podName:f7bc9c86-d3a7-43d7-9862-6170cb691894 nodeName:}" failed. 
No retries permitted until 2026-04-17 14:08:10.272069486 +0000 UTC m=+39.609860575 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert") pod "ingress-canary-n4248" (UID: "f7bc9c86-d3a7-43d7-9862-6170cb691894") : secret "canary-serving-cert" not found Apr 17 14:08:09.178198 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:09.178154 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:08:09.180492 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:09.180466 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a39d75e4-6ec9-4f74-b702-49461b73e668-original-pull-secret\") pod \"global-pull-secret-syncer-cgd26\" (UID: \"a39d75e4-6ec9-4f74-b702-49461b73e668\") " pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:08:09.471780 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:09.471695 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cgd26" Apr 17 14:08:09.610452 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:09.610423 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-cgd26"] Apr 17 14:08:09.613844 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:08:09.613815 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda39d75e4_6ec9_4f74_b702_49461b73e668.slice/crio-3059fcb55c4e25dbe4f35c370cc367384eacb983953279eef25e0d3e2ba8eb5c WatchSource:0}: Error finding container 3059fcb55c4e25dbe4f35c370cc367384eacb983953279eef25e0d3e2ba8eb5c: Status 404 returned error can't find the container with id 3059fcb55c4e25dbe4f35c370cc367384eacb983953279eef25e0d3e2ba8eb5c Apr 17 14:08:10.286161 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:10.286077 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:10.286161 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:10.286132 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:10.286425 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:10.286242 2568 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 14:08:10.286425 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:10.286313 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls podName:c99bcc58-e14e-4455-8308-f2a36ad35eff nodeName:}" failed. No retries permitted until 2026-04-17 14:08:14.286289654 +0000 UTC m=+43.624080755 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls") pod "dns-default-mrqpz" (UID: "c99bcc58-e14e-4455-8308-f2a36ad35eff") : secret "dns-default-metrics-tls" not found Apr 17 14:08:10.286425 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:10.286245 2568 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 14:08:10.286425 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:10.286396 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert podName:f7bc9c86-d3a7-43d7-9862-6170cb691894 nodeName:}" failed. No retries permitted until 2026-04-17 14:08:14.286379434 +0000 UTC m=+43.624170528 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert") pod "ingress-canary-n4248" (UID: "f7bc9c86-d3a7-43d7-9862-6170cb691894") : secret "canary-serving-cert" not found Apr 17 14:08:10.448108 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:10.448064 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-cgd26" event={"ID":"a39d75e4-6ec9-4f74-b702-49461b73e668","Type":"ContainerStarted","Data":"3059fcb55c4e25dbe4f35c370cc367384eacb983953279eef25e0d3e2ba8eb5c"} Apr 17 14:08:14.321604 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:14.321499 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:14.322041 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:14.321602 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:14.322041 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:14.321652 2568 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 14:08:14.322041 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:14.321715 2568 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 14:08:14.322041 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:14.321719 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert podName:f7bc9c86-d3a7-43d7-9862-6170cb691894 nodeName:}" failed. No retries permitted until 2026-04-17 14:08:22.321702763 +0000 UTC m=+51.659493853 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert") pod "ingress-canary-n4248" (UID: "f7bc9c86-d3a7-43d7-9862-6170cb691894") : secret "canary-serving-cert" not found Apr 17 14:08:14.322041 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:14.321780 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls podName:c99bcc58-e14e-4455-8308-f2a36ad35eff nodeName:}" failed. 
Apr 17 14:08:14.457697 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:14.457661 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-cgd26" event={"ID":"a39d75e4-6ec9-4f74-b702-49461b73e668","Type":"ContainerStarted","Data":"048a8fb6f90d515277de5675bdd3bec489061ea028653598fb333401f2972600"}
Apr 17 14:08:14.473099 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:14.473041 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-cgd26" podStartSLOduration=33.085615487 podStartE2EDuration="37.473024981s" podCreationTimestamp="2026-04-17 14:07:37 +0000 UTC" firstStartedPulling="2026-04-17 14:08:09.615407339 +0000 UTC m=+38.953198428" lastFinishedPulling="2026-04-17 14:08:14.00281682 +0000 UTC m=+43.340607922" observedRunningTime="2026-04-17 14:08:14.472632842 +0000 UTC m=+43.810423953" watchObservedRunningTime="2026-04-17 14:08:14.473024981 +0000 UTC m=+43.810816093"
Apr 17 14:08:22.373423 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:22.373373 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz"
Apr 17 14:08:22.373852 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:22.373441 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248"
Apr 17 14:08:22.373852 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:22.373521 2568 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 17 14:08:22.373852 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:22.373575 2568 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 17 14:08:22.373852 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:22.373586 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls podName:c99bcc58-e14e-4455-8308-f2a36ad35eff nodeName:}" failed. No retries permitted until 2026-04-17 14:08:38.373568025 +0000 UTC m=+67.711359117 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls") pod "dns-default-mrqpz" (UID: "c99bcc58-e14e-4455-8308-f2a36ad35eff") : secret "dns-default-metrics-tls" not found
Apr 17 14:08:22.373852 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:22.373628 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert podName:f7bc9c86-d3a7-43d7-9862-6170cb691894 nodeName:}" failed. No retries permitted until 2026-04-17 14:08:38.373610798 +0000 UTC m=+67.711401888 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert") pod "ingress-canary-n4248" (UID: "f7bc9c86-d3a7-43d7-9862-6170cb691894") : secret "canary-serving-cert" not found
Apr 17 14:08:31.427818 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:31.427792 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmbqf"
Apr 17 14:08:36.871738 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:36.871703 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h"
Apr 17 14:08:36.874729 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:36.874711 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 17 14:08:36.882891 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:36.882852 2568 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Apr 17 14:08:36.883003 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:36.882973 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs podName:30cbf063-628a-472d-981e-312f5bea1f7f nodeName:}" failed. No retries permitted until 2026-04-17 14:09:40.882950769 +0000 UTC m=+130.220741870 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs") pod "network-metrics-daemon-vgr2h" (UID: "30cbf063-628a-472d-981e-312f5bea1f7f") : secret "metrics-daemon-secret" not found
Apr 17 14:08:36.972936 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:36.972901 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx"
Apr 17 14:08:36.975967 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:36.975947 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Apr 17 14:08:36.985152 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:36.985129 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Apr 17 14:08:36.995708 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:36.995676 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpm4\" (UniqueName: \"kubernetes.io/projected/6946cb58-b181-4207-94e3-02bca6a030b2-kube-api-access-zzpm4\") pod \"network-check-target-gh5vx\" (UID: \"6946cb58-b181-4207-94e3-02bca6a030b2\") " pod="openshift-network-diagnostics/network-check-target-gh5vx"
Apr 17 14:08:37.080402 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:37.080372 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-x9vkl\""
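Note the change in failure mode for the metrics-certs volume: at 14:08:04 the error was object "openshift-multus"/"metrics-daemon-secret" not registered (the kubelet had not yet started watching that object), while at 14:08:36, right after the corresponding "Caches populated" entry, it becomes secret "metrics-daemon-secret" not found: the watch now exists but the object is genuinely absent, so the retry is pushed out 1m4s. A minimal client-go sketch for checking from outside the kubelet which case applies, assuming in-cluster credentials; this is a generic diagnostic, not part of the kubelet:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the check runs inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The secret the kubelet is waiting for in the entries above.
	_, err = client.CoreV1().Secrets("openshift-multus").
		Get(context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
	switch {
	case err == nil:
		fmt.Println("secret exists; the mount should succeed on the next retry")
	case apierrors.IsNotFound(err):
		fmt.Println("secret really is missing; check the controller that should create it")
	default:
		fmt.Println("transient API error:", err)
	}
}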
reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-x9vkl\"" Apr 17 14:08:37.088162 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:37.088134 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:08:37.211402 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:37.211370 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-gh5vx"] Apr 17 14:08:37.214902 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:08:37.214845 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6946cb58_b181_4207_94e3_02bca6a030b2.slice/crio-13cbe3be9307ae37f083920283addb34db9fdd464ab1db58fd9babb0728dba0f WatchSource:0}: Error finding container 13cbe3be9307ae37f083920283addb34db9fdd464ab1db58fd9babb0728dba0f: Status 404 returned error can't find the container with id 13cbe3be9307ae37f083920283addb34db9fdd464ab1db58fd9babb0728dba0f Apr 17 14:08:37.497987 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:37.497946 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-gh5vx" event={"ID":"6946cb58-b181-4207-94e3-02bca6a030b2","Type":"ContainerStarted","Data":"13cbe3be9307ae37f083920283addb34db9fdd464ab1db58fd9babb0728dba0f"} Apr 17 14:08:38.384719 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:38.384686 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:08:38.385197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:38.384759 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:08:38.385197 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:38.384875 2568 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 14:08:38.385197 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:38.384888 2568 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 14:08:38.385197 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:38.384950 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert podName:f7bc9c86-d3a7-43d7-9862-6170cb691894 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:10.384928265 +0000 UTC m=+99.722719355 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert") pod "ingress-canary-n4248" (UID: "f7bc9c86-d3a7-43d7-9862-6170cb691894") : secret "canary-serving-cert" not found Apr 17 14:08:38.385197 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:08:38.384965 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls podName:c99bcc58-e14e-4455-8308-f2a36ad35eff nodeName:}" failed. 
No retries permitted until 2026-04-17 14:09:10.384959097 +0000 UTC m=+99.722750186 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls") pod "dns-default-mrqpz" (UID: "c99bcc58-e14e-4455-8308-f2a36ad35eff") : secret "dns-default-metrics-tls" not found Apr 17 14:08:40.506242 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:40.506202 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-gh5vx" event={"ID":"6946cb58-b181-4207-94e3-02bca6a030b2","Type":"ContainerStarted","Data":"f514e89e2fe55048b445fb5f56e589459e65b0e6e32225dcb5fb81737d6dff82"} Apr 17 14:08:40.506691 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:40.506319 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:08:40.522452 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:08:40.522405 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-gh5vx" podStartSLOduration=66.882872606 podStartE2EDuration="1m9.522392563s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:08:37.216720811 +0000 UTC m=+66.554511905" lastFinishedPulling="2026-04-17 14:08:39.85624077 +0000 UTC m=+69.194031862" observedRunningTime="2026-04-17 14:08:40.521337743 +0000 UTC m=+69.859128865" watchObservedRunningTime="2026-04-17 14:08:40.522392563 +0000 UTC m=+69.860183721" Apr 17 14:09:10.413351 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:10.413317 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:09:10.413762 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:10.413375 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:09:10.413762 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:10.413464 2568 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 14:09:10.413762 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:10.413469 2568 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 14:09:10.413762 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:10.413527 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert podName:f7bc9c86-d3a7-43d7-9862-6170cb691894 nodeName:}" failed. No retries permitted until 2026-04-17 14:10:14.413512976 +0000 UTC m=+163.751304065 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert") pod "ingress-canary-n4248" (UID: "f7bc9c86-d3a7-43d7-9862-6170cb691894") : secret "canary-serving-cert" not found Apr 17 14:09:10.413762 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:10.413541 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls podName:c99bcc58-e14e-4455-8308-f2a36ad35eff nodeName:}" failed. No retries permitted until 2026-04-17 14:10:14.413534543 +0000 UTC m=+163.751325632 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls") pod "dns-default-mrqpz" (UID: "c99bcc58-e14e-4455-8308-f2a36ad35eff") : secret "dns-default-metrics-tls" not found Apr 17 14:09:11.510328 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:11.510297 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-gh5vx" Apr 17 14:09:12.171853 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.171816 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh"] Apr 17 14:09:12.176603 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.176586 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.178907 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.178876 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-flwp6\"" Apr 17 14:09:12.179072 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.179032 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 17 14:09:12.180164 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.180149 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 17 14:09:12.181730 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.181712 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemetry-config\"" Apr 17 14:09:12.181730 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.181724 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\"" Apr 17 14:09:12.182789 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.182766 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh"] Apr 17 14:09:12.227845 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.227812 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d3d37f50-e860-4ca0-9480-8785e349ad48-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.227845 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.227849 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqgv\" (UniqueName: 
\"kubernetes.io/projected/d3d37f50-e860-4ca0-9480-8785e349ad48-kube-api-access-fjqgv\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.228061 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.227907 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.328832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.328794 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d3d37f50-e860-4ca0-9480-8785e349ad48-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.328832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.328833 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqgv\" (UniqueName: \"kubernetes.io/projected/d3d37f50-e860-4ca0-9480-8785e349ad48-kube-api-access-fjqgv\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.329120 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.329002 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.329183 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:12.329131 2568 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:12.329234 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:12.329203 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls podName:d3d37f50-e860-4ca0-9480-8785e349ad48 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:12.829179038 +0000 UTC m=+102.166970138 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-b6tjh" (UID: "d3d37f50-e860-4ca0-9480-8785e349ad48") : secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:12.329602 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.329579 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d3d37f50-e860-4ca0-9480-8785e349ad48-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.339430 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.339394 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqgv\" (UniqueName: \"kubernetes.io/projected/d3d37f50-e860-4ca0-9480-8785e349ad48-kube-api-access-fjqgv\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.833576 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:12.833518 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:12.833996 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:12.833638 2568 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:12.833996 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:12.833698 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls podName:d3d37f50-e860-4ca0-9480-8785e349ad48 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:13.833684658 +0000 UTC m=+103.171475746 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-b6tjh" (UID: "d3d37f50-e860-4ca0-9480-8785e349ad48") : secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:13.841031 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:13.840989 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:13.841412 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:13.841140 2568 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:13.841412 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:13.841209 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls podName:d3d37f50-e860-4ca0-9480-8785e349ad48 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:15.8411917 +0000 UTC m=+105.178982791 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-b6tjh" (UID: "d3d37f50-e860-4ca0-9480-8785e349ad48") : secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:15.855378 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:15.855324 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:15.855833 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:15.855473 2568 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:15.855833 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:15.855538 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls podName:d3d37f50-e860-4ca0-9480-8785e349ad48 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:19.855523208 +0000 UTC m=+109.193314297 (durationBeforeRetry 4s). 
Apr 17 14:09:16.224595 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.224561 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr"]
Apr 17 14:09:16.227534 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.227518 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr"
Apr 17 14:09:16.229988 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.229966 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Apr 17 14:09:16.230110 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.230092 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Apr 17 14:09:16.231063 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.231045 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-d8kc6\""
Apr 17 14:09:16.231063 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.231060 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Apr 17 14:09:16.236284 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.236264 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr"]
Apr 17 14:09:16.328340 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.328308 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5"]
Apr 17 14:09:16.331132 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.331116 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5"
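
The "Caches populated" lines appear to be the kubelet's per-object reflectors warming up: when a pod is ADDed, it starts watches scoped to just the Secrets and ConfigMaps that pod mounts and waits for the local caches to sync before serving reads from them. The same wait-then-read pattern, sketched with client-go informers under stated assumptions (the kubeconfig path is invented for the sketch; the kubelet wires its API client differently):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Watch a single namespace, as the scoped reflectors above do.
    	factory := informers.NewSharedInformerFactoryWithOptions(client, 30*time.Second,
    		informers.WithNamespace("openshift-cluster-samples-operator"))
    	secrets := factory.Core().V1().Secrets()
    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	// The moral equivalent of waiting for "Caches populated".
    	if !cache.WaitForCacheSync(stop, secrets.Informer().HasSynced) {
    		panic("cache never synced")
    	}

    	// Reads now come from the synced local cache, not the API server.
    	s, err := secrets.Lister().Secrets("openshift-cluster-samples-operator").Get("samples-operator-tls")
    	fmt.Println(s, err) // NotFound until the secret is actually created
    }

Note that a populated cache does not mean the secret exists; as the entries below show, the cache is synced yet samples-operator-tls is still absent, so the mount keeps failing.
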
Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5" Apr 17 14:09:16.333607 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.333584 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\"" Apr 17 14:09:16.333607 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.333604 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-storage-operator\"/\"volume-data-source-validator-dockercfg-sdfhg\"" Apr 17 14:09:16.333772 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.333608 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\"" Apr 17 14:09:16.338280 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.338256 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5"] Apr 17 14:09:16.359450 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.359416 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:16.359608 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.359520 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvf5\" (UniqueName: \"kubernetes.io/projected/6a93e218-4c76-4f41-ac1c-71595ca764d4-kube-api-access-ssvf5\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:16.460927 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.460882 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrx62\" (UniqueName: \"kubernetes.io/projected/9d9069f6-f802-49ac-ae51-9f6f9fac00d2-kube-api-access-xrx62\") pod \"volume-data-source-validator-7c6cbb6c87-9rmr5\" (UID: \"9d9069f6-f802-49ac-ae51-9f6f9fac00d2\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5" Apr 17 14:09:16.461062 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.460949 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvf5\" (UniqueName: \"kubernetes.io/projected/6a93e218-4c76-4f41-ac1c-71595ca764d4-kube-api-access-ssvf5\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:16.461062 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.460992 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:16.461129 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:16.461074 2568 
secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 14:09:16.461129 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:16.461119 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls podName:6a93e218-4c76-4f41-ac1c-71595ca764d4 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:16.961105008 +0000 UTC m=+106.298896097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6qfvr" (UID: "6a93e218-4c76-4f41-ac1c-71595ca764d4") : secret "samples-operator-tls" not found Apr 17 14:09:16.469464 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.469442 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvf5\" (UniqueName: \"kubernetes.io/projected/6a93e218-4c76-4f41-ac1c-71595ca764d4-kube-api-access-ssvf5\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:16.561977 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.561852 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xrx62\" (UniqueName: \"kubernetes.io/projected/9d9069f6-f802-49ac-ae51-9f6f9fac00d2-kube-api-access-xrx62\") pod \"volume-data-source-validator-7c6cbb6c87-9rmr5\" (UID: \"9d9069f6-f802-49ac-ae51-9f6f9fac00d2\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5" Apr 17 14:09:16.570234 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.570214 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrx62\" (UniqueName: \"kubernetes.io/projected/9d9069f6-f802-49ac-ae51-9f6f9fac00d2-kube-api-access-xrx62\") pod \"volume-data-source-validator-7c6cbb6c87-9rmr5\" (UID: \"9d9069f6-f802-49ac-ae51-9f6f9fac00d2\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5" Apr 17 14:09:16.640517 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.640469 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5" Apr 17 14:09:16.770315 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.770283 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5"] Apr 17 14:09:16.774003 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:16.773977 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d9069f6_f802_49ac_ae51_9f6f9fac00d2.slice/crio-6a76f38a2159a65cace1f66baa2784523692605de4be9744e3415a3fb7898ac8 WatchSource:0}: Error finding container 6a76f38a2159a65cace1f66baa2784523692605de4be9744e3415a3fb7898ac8: Status 404 returned error can't find the container with id 6a76f38a2159a65cace1f66baa2784523692605de4be9744e3415a3fb7898ac8 Apr 17 14:09:16.963742 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:16.963705 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:16.964134 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:16.963826 2568 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 14:09:16.964134 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:16.963904 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls podName:6a93e218-4c76-4f41-ac1c-71595ca764d4 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:17.963889219 +0000 UTC m=+107.301680308 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6qfvr" (UID: "6a93e218-4c76-4f41-ac1c-71595ca764d4") : secret "samples-operator-tls" not found Apr 17 14:09:17.577517 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:17.577474 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5" event={"ID":"9d9069f6-f802-49ac-ae51-9f6f9fac00d2","Type":"ContainerStarted","Data":"6a76f38a2159a65cace1f66baa2784523692605de4be9744e3415a3fb7898ac8"} Apr 17 14:09:17.970617 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:17.970572 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:17.971049 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:17.970720 2568 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 14:09:17.971049 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:17.970786 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls podName:6a93e218-4c76-4f41-ac1c-71595ca764d4 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:19.970770298 +0000 UTC m=+109.308561387 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6qfvr" (UID: "6a93e218-4c76-4f41-ac1c-71595ca764d4") : secret "samples-operator-tls" not found Apr 17 14:09:18.230048 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.229953 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-s6z2b"] Apr 17 14:09:18.233390 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.233368 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.236101 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.236068 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Apr 17 14:09:18.236228 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.236071 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Apr 17 14:09:18.237273 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.237235 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-bm5hg\"" Apr 17 14:09:18.237393 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.237286 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Apr 17 14:09:18.237521 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.237501 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Apr 17 14:09:18.242288 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.241975 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Apr 17 14:09:18.242879 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.242709 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-s6z2b"] Apr 17 14:09:18.372980 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.372950 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12933e12-58b2-4d0c-993a-ab690936a989-trusted-ca\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.373122 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.372999 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12933e12-58b2-4d0c-993a-ab690936a989-config\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.373122 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.373034 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2xg6\" (UniqueName: \"kubernetes.io/projected/12933e12-58b2-4d0c-993a-ab690936a989-kube-api-access-w2xg6\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.373122 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.373080 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12933e12-58b2-4d0c-993a-ab690936a989-serving-cert\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.473905 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.473837 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/12933e12-58b2-4d0c-993a-ab690936a989-config\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.474076 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.473918 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w2xg6\" (UniqueName: \"kubernetes.io/projected/12933e12-58b2-4d0c-993a-ab690936a989-kube-api-access-w2xg6\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.474076 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.473972 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12933e12-58b2-4d0c-993a-ab690936a989-serving-cert\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.474076 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.474043 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12933e12-58b2-4d0c-993a-ab690936a989-trusted-ca\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.474834 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.474812 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12933e12-58b2-4d0c-993a-ab690936a989-config\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.474904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.474812 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12933e12-58b2-4d0c-993a-ab690936a989-trusted-ca\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.476418 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.476387 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12933e12-58b2-4d0c-993a-ab690936a989-serving-cert\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.481797 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.481736 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2xg6\" (UniqueName: \"kubernetes.io/projected/12933e12-58b2-4d0c-993a-ab690936a989-kube-api-access-w2xg6\") pod \"console-operator-9d4b6777b-s6z2b\" (UID: \"12933e12-58b2-4d0c-993a-ab690936a989\") " pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.545763 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.545713 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:18.551585 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.551559 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sh5zz_0ca24926-e680-41ea-85df-e8d6ba856597/dns-node-resolver/0.log" Apr 17 14:09:18.581422 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.581393 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5" event={"ID":"9d9069f6-f802-49ac-ae51-9f6f9fac00d2","Type":"ContainerStarted","Data":"32a97bd681aaf704d4016e6a902c5f34b02684229edeb3e968584ed329b6e32b"} Apr 17 14:09:18.597689 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.596676 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-9rmr5" podStartSLOduration=1.051435596 podStartE2EDuration="2.596654531s" podCreationTimestamp="2026-04-17 14:09:16 +0000 UTC" firstStartedPulling="2026-04-17 14:09:16.776176591 +0000 UTC m=+106.113967681" lastFinishedPulling="2026-04-17 14:09:18.321395524 +0000 UTC m=+107.659186616" observedRunningTime="2026-04-17 14:09:18.595570632 +0000 UTC m=+107.933361743" watchObservedRunningTime="2026-04-17 14:09:18.596654531 +0000 UTC m=+107.934445642" Apr 17 14:09:18.659234 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:18.659199 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-s6z2b"] Apr 17 14:09:18.662728 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:18.662703 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12933e12_58b2_4d0c_993a_ab690936a989.slice/crio-835eccde002a13f5ffb83d0bfab66ebf7c1b51181288777cd8a5652dc4218510 WatchSource:0}: Error finding container 835eccde002a13f5ffb83d0bfab66ebf7c1b51181288777cd8a5652dc4218510: Status 404 returned error can't find the container with id 835eccde002a13f5ffb83d0bfab66ebf7c1b51181288777cd8a5652dc4218510 Apr 17 14:09:19.551120 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:19.551087 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-6nflw_dd5f44ea-0850-4ff4-8f21-1f4135fc02ca/node-ca/0.log" Apr 17 14:09:19.584783 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:19.584749 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" event={"ID":"12933e12-58b2-4d0c-993a-ab690936a989","Type":"ContainerStarted","Data":"835eccde002a13f5ffb83d0bfab66ebf7c1b51181288777cd8a5652dc4218510"} Apr 17 14:09:19.886502 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:19.886416 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:19.886665 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:19.886541 2568 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:19.886665 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:19.886622 2568 
Apr 17 14:09:19.886665 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:19.886622 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls podName:d3d37f50-e860-4ca0-9480-8785e349ad48 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:27.886602902 +0000 UTC m=+117.224393998 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-b6tjh" (UID: "d3d37f50-e860-4ca0-9480-8785e349ad48") : secret "cluster-monitoring-operator-tls" not found
Apr 17 14:09:19.987411 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:19.987373 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr"
Apr 17 14:09:19.987572 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:19.987524 2568 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 17 14:09:19.987614 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:19.987591 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls podName:6a93e218-4c76-4f41-ac1c-71595ca764d4 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:23.987575547 +0000 UTC m=+113.325366637 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6qfvr" (UID: "6a93e218-4c76-4f41-ac1c-71595ca764d4") : secret "samples-operator-tls" not found
Apr 17 14:09:21.589572 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:21.589540 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/0.log"
Apr 17 14:09:21.589948 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:21.589579 2568 generic.go:358] "Generic (PLEG): container finished" podID="12933e12-58b2-4d0c-993a-ab690936a989" containerID="6a5c4155761149488a2ee1e9365356c3f8da51d4e24ad7e8fe26e5364aca151b" exitCode=255
Apr 17 14:09:21.589948 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:21.589612 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" event={"ID":"12933e12-58b2-4d0c-993a-ab690936a989","Type":"ContainerDied","Data":"6a5c4155761149488a2ee1e9365356c3f8da51d4e24ad7e8fe26e5364aca151b"}
Apr 17 14:09:21.589948 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:21.589883 2568 scope.go:117] "RemoveContainer" containerID="6a5c4155761149488a2ee1e9365356c3f8da51d4e24ad7e8fe26e5364aca151b"
Apr 17 14:09:22.277890 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.277841 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd"]
Apr 17 14:09:22.280792 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.280776 2568 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.283287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.283258 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Apr 17 14:09:22.283413 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.283264 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Apr 17 14:09:22.284381 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.284361 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Apr 17 14:09:22.284481 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.284365 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-hlr7j\"" Apr 17 14:09:22.284481 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.284401 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Apr 17 14:09:22.287652 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.287631 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd"] Apr 17 14:09:22.405806 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.405764 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fqm\" (UniqueName: \"kubernetes.io/projected/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-kube-api-access-s8fqm\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: \"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.405997 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.405825 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-config\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: \"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.405997 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.405877 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-serving-cert\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: \"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.506373 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.506325 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-config\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: \"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.506373 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.506384 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-serving-cert\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: 
\"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.506534 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.506447 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8fqm\" (UniqueName: \"kubernetes.io/projected/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-kube-api-access-s8fqm\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: \"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.506849 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.506830 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-config\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: \"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.508630 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.508612 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-serving-cert\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: \"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.514899 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.514876 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8fqm\" (UniqueName: \"kubernetes.io/projected/fa9a1ab0-741d-44b8-9de1-9a0b296aee9c-kube-api-access-s8fqm\") pod \"service-ca-operator-d6fc45fc5-qvzpd\" (UID: \"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.590461 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.590357 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" Apr 17 14:09:22.593108 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.593085 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:09:22.593426 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.593408 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/0.log" Apr 17 14:09:22.593503 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.593442 2568 generic.go:358] "Generic (PLEG): container finished" podID="12933e12-58b2-4d0c-993a-ab690936a989" containerID="c45b7e9e805662af6935bf96596aa88764d22993219c62b5af0b102ce927a52c" exitCode=255 Apr 17 14:09:22.593503 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.593492 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" event={"ID":"12933e12-58b2-4d0c-993a-ab690936a989","Type":"ContainerDied","Data":"c45b7e9e805662af6935bf96596aa88764d22993219c62b5af0b102ce927a52c"} Apr 17 14:09:22.593597 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.593525 2568 scope.go:117] "RemoveContainer" containerID="6a5c4155761149488a2ee1e9365356c3f8da51d4e24ad7e8fe26e5364aca151b" Apr 17 14:09:22.593787 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.593763 2568 scope.go:117] "RemoveContainer" containerID="c45b7e9e805662af6935bf96596aa88764d22993219c62b5af0b102ce927a52c" Apr 17 14:09:22.593971 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:22.593955 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-s6z2b_openshift-console-operator(12933e12-58b2-4d0c-993a-ab690936a989)\"" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" podUID="12933e12-58b2-4d0c-993a-ab690936a989" Apr 17 14:09:22.704951 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:22.704915 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd"] Apr 17 14:09:22.707538 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:22.707507 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9a1ab0_741d_44b8_9de1_9a0b296aee9c.slice/crio-92cf551e7ea5e33350168d1fa8a689bfc26c8484aa9af23925521d5f78ee76ee WatchSource:0}: Error finding container 92cf551e7ea5e33350168d1fa8a689bfc26c8484aa9af23925521d5f78ee76ee: Status 404 returned error can't find the container with id 92cf551e7ea5e33350168d1fa8a689bfc26c8484aa9af23925521d5f78ee76ee Apr 17 14:09:23.596439 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:23.596403 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" event={"ID":"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c","Type":"ContainerStarted","Data":"92cf551e7ea5e33350168d1fa8a689bfc26c8484aa9af23925521d5f78ee76ee"} Apr 17 14:09:23.597921 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:23.597896 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 
Apr 17 14:09:23.598302 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:23.598280 2568 scope.go:117] "RemoveContainer" containerID="c45b7e9e805662af6935bf96596aa88764d22993219c62b5af0b102ce927a52c"
Apr 17 14:09:23.598476 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:23.598455 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-s6z2b_openshift-console-operator(12933e12-58b2-4d0c-993a-ab690936a989)\"" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" podUID="12933e12-58b2-4d0c-993a-ab690936a989"
Apr 17 14:09:24.017422 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:24.017388 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr"
Apr 17 14:09:24.017619 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:24.017552 2568 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 17 14:09:24.017674 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:24.017624 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls podName:6a93e218-4c76-4f41-ac1c-71595ca764d4 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:32.017605975 +0000 UTC m=+121.355397069 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6qfvr" (UID: "6a93e218-4c76-4f41-ac1c-71595ca764d4") : secret "samples-operator-tls" not found
Apr 17 14:09:25.192380 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.192350 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf"]
Apr 17 14:09:25.195398 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.195382 2568 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" Apr 17 14:09:25.199012 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.198987 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Apr 17 14:09:25.199012 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.199004 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-vm9b2\"" Apr 17 14:09:25.199207 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.198996 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Apr 17 14:09:25.203588 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.203567 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf"] Apr 17 14:09:25.328709 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.328673 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmfv6\" (UniqueName: \"kubernetes.io/projected/1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772-kube-api-access-wmfv6\") pod \"migrator-74bb7799d9-nctkf\" (UID: \"1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" Apr 17 14:09:25.429394 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.429364 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmfv6\" (UniqueName: \"kubernetes.io/projected/1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772-kube-api-access-wmfv6\") pod \"migrator-74bb7799d9-nctkf\" (UID: \"1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" Apr 17 14:09:25.437912 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.437820 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmfv6\" (UniqueName: \"kubernetes.io/projected/1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772-kube-api-access-wmfv6\") pod \"migrator-74bb7799d9-nctkf\" (UID: \"1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" Apr 17 14:09:25.504744 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.504636 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" Apr 17 14:09:25.603675 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.603643 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" event={"ID":"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c","Type":"ContainerStarted","Data":"5d3c72dad9f0ec124222e36b684f9f38003b2b84696a255cae7c5392b119946d"} Apr 17 14:09:25.618250 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.618200 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" podStartSLOduration=1.739950527 podStartE2EDuration="3.618184024s" podCreationTimestamp="2026-04-17 14:09:22 +0000 UTC" firstStartedPulling="2026-04-17 14:09:22.709394012 +0000 UTC m=+112.047185101" lastFinishedPulling="2026-04-17 14:09:24.587627508 +0000 UTC m=+113.925418598" observedRunningTime="2026-04-17 14:09:25.61754559 +0000 UTC m=+114.955336718" watchObservedRunningTime="2026-04-17 14:09:25.618184024 +0000 UTC m=+114.955975135" Apr 17 14:09:25.630064 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:25.630035 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf"] Apr 17 14:09:25.633240 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:25.633204 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a5d1cc2_214d_4ed2_a8be_4b70dbf4c772.slice/crio-308c46087dc4508067e26b68076dc974aa47eea8c50039b3a6eeeaff86b3a21e WatchSource:0}: Error finding container 308c46087dc4508067e26b68076dc974aa47eea8c50039b3a6eeeaff86b3a21e: Status 404 returned error can't find the container with id 308c46087dc4508067e26b68076dc974aa47eea8c50039b3a6eeeaff86b3a21e Apr 17 14:09:26.606325 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:26.606291 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" event={"ID":"1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772","Type":"ContainerStarted","Data":"308c46087dc4508067e26b68076dc974aa47eea8c50039b3a6eeeaff86b3a21e"} Apr 17 14:09:27.610628 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:27.610593 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" event={"ID":"1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772","Type":"ContainerStarted","Data":"a8f93a663580b1c056b2806a1eca6f13a986ab06d8a04d81d7e8e61770e291ca"} Apr 17 14:09:27.610628 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:27.610629 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" event={"ID":"1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772","Type":"ContainerStarted","Data":"0ce77cad40b494d67875c90435723cd034592300da299b395a76820d357e0aae"} Apr 17 14:09:27.627305 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:27.627248 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-nctkf" podStartSLOduration=1.2881212309999999 podStartE2EDuration="2.627228929s" podCreationTimestamp="2026-04-17 14:09:25 +0000 UTC" firstStartedPulling="2026-04-17 14:09:25.635270026 +0000 UTC m=+114.973061129" lastFinishedPulling="2026-04-17 14:09:26.974377737 +0000 UTC m=+116.312168827" observedRunningTime="2026-04-17 14:09:27.62668732 +0000 UTC 
m=+116.964478431" watchObservedRunningTime="2026-04-17 14:09:27.627228929 +0000 UTC m=+116.965020040" Apr 17 14:09:27.948015 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:27.947980 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:27.948167 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:27.948143 2568 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:27.948230 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:27.948221 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls podName:d3d37f50-e860-4ca0-9480-8785e349ad48 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:43.948203191 +0000 UTC m=+133.285994285 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-b6tjh" (UID: "d3d37f50-e860-4ca0-9480-8785e349ad48") : secret "cluster-monitoring-operator-tls" not found Apr 17 14:09:28.546362 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:28.546323 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:28.546362 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:28.546359 2568 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:28.546802 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:28.546782 2568 scope.go:117] "RemoveContainer" containerID="c45b7e9e805662af6935bf96596aa88764d22993219c62b5af0b102ce927a52c" Apr 17 14:09:28.547023 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:28.547002 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-s6z2b_openshift-console-operator(12933e12-58b2-4d0c-993a-ab690936a989)\"" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" podUID="12933e12-58b2-4d0c-993a-ab690936a989" Apr 17 14:09:32.080025 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:32.079984 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:32.080381 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:32.080137 2568 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 14:09:32.080381 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:32.080205 2568 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls podName:6a93e218-4c76-4f41-ac1c-71595ca764d4 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:48.080188216 +0000 UTC m=+137.417979314 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6qfvr" (UID: "6a93e218-4c76-4f41-ac1c-71595ca764d4") : secret "samples-operator-tls" not found Apr 17 14:09:40.259308 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.259279 2568 scope.go:117] "RemoveContainer" containerID="c45b7e9e805662af6935bf96596aa88764d22993219c62b5af0b102ce927a52c" Apr 17 14:09:40.644105 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.644031 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:09:40.644238 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.644110 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" event={"ID":"12933e12-58b2-4d0c-993a-ab690936a989","Type":"ContainerStarted","Data":"ae6104fb2df308681c225e7d5b1be3e716985077cd216956e0845e47beb422c2"} Apr 17 14:09:40.644377 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.644361 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:40.662770 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.662724 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" podStartSLOduration=20.404360618 podStartE2EDuration="22.662711171s" podCreationTimestamp="2026-04-17 14:09:18 +0000 UTC" firstStartedPulling="2026-04-17 14:09:18.664518537 +0000 UTC m=+108.002309629" lastFinishedPulling="2026-04-17 14:09:20.922869077 +0000 UTC m=+110.260660182" observedRunningTime="2026-04-17 14:09:40.66088262 +0000 UTC m=+129.998673732" watchObservedRunningTime="2026-04-17 14:09:40.662711171 +0000 UTC m=+130.000502282" Apr 17 14:09:40.948525 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.948484 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:09:40.950827 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.950797 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30cbf063-628a-472d-981e-312f5bea1f7f-metrics-certs\") pod \"network-metrics-daemon-vgr2h\" (UID: \"30cbf063-628a-472d-981e-312f5bea1f7f\") " pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:09:40.985358 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.985329 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-bfspf\"" Apr 17 14:09:40.993510 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:40.993489 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vgr2h" Apr 17 14:09:41.108446 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:41.108400 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vgr2h"] Apr 17 14:09:41.111724 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:41.111700 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30cbf063_628a_472d_981e_312f5bea1f7f.slice/crio-289bf9ff6e1d99e085de769fd7a124c2db0e128a73ca10cb2f3f902d5b203b6c WatchSource:0}: Error finding container 289bf9ff6e1d99e085de769fd7a124c2db0e128a73ca10cb2f3f902d5b203b6c: Status 404 returned error can't find the container with id 289bf9ff6e1d99e085de769fd7a124c2db0e128a73ca10cb2f3f902d5b203b6c Apr 17 14:09:41.194430 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:41.194399 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-9d4b6777b-s6z2b" Apr 17 14:09:41.648022 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:41.647982 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vgr2h" event={"ID":"30cbf063-628a-472d-981e-312f5bea1f7f","Type":"ContainerStarted","Data":"289bf9ff6e1d99e085de769fd7a124c2db0e128a73ca10cb2f3f902d5b203b6c"} Apr 17 14:09:43.654973 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:43.654939 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vgr2h" event={"ID":"30cbf063-628a-472d-981e-312f5bea1f7f","Type":"ContainerStarted","Data":"c699966b5b3112f6fbb2598b91f1c0cec840c5e46b8ff8bdf1eed7f6bc4b1958"} Apr 17 14:09:43.654973 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:43.654976 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vgr2h" event={"ID":"30cbf063-628a-472d-981e-312f5bea1f7f","Type":"ContainerStarted","Data":"8f050eaf05ff947bc61954279eb4eaf801ba82b359ca934f4bfcd06f8a416cbf"} Apr 17 14:09:43.669874 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:43.669811 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vgr2h" podStartSLOduration=130.941963617 podStartE2EDuration="2m12.669793722s" podCreationTimestamp="2026-04-17 14:07:31 +0000 UTC" firstStartedPulling="2026-04-17 14:09:41.113676288 +0000 UTC m=+130.451467377" lastFinishedPulling="2026-04-17 14:09:42.841506388 +0000 UTC m=+132.179297482" observedRunningTime="2026-04-17 14:09:43.669117908 +0000 UTC m=+133.006909018" watchObservedRunningTime="2026-04-17 14:09:43.669793722 +0000 UTC m=+133.007584833" Apr 17 14:09:43.970699 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:43.970664 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:43.973215 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:43.973191 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3d37f50-e860-4ca0-9480-8785e349ad48-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-75587bd455-b6tjh\" (UID: \"d3d37f50-e860-4ca0-9480-8785e349ad48\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:43.988362 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:43.988330 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-flwp6\"" Apr 17 14:09:43.996213 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:43.996195 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" Apr 17 14:09:44.107504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:44.107473 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh"] Apr 17 14:09:44.110339 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:44.110309 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3d37f50_e860_4ca0_9480_8785e349ad48.slice/crio-bf44ba3025d5eaa85ed8eb8f467475f40ee122044a82a41a228c4bce738a329d WatchSource:0}: Error finding container bf44ba3025d5eaa85ed8eb8f467475f40ee122044a82a41a228c4bce738a329d: Status 404 returned error can't find the container with id bf44ba3025d5eaa85ed8eb8f467475f40ee122044a82a41a228c4bce738a329d Apr 17 14:09:44.658458 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:44.658419 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" event={"ID":"d3d37f50-e860-4ca0-9480-8785e349ad48","Type":"ContainerStarted","Data":"bf44ba3025d5eaa85ed8eb8f467475f40ee122044a82a41a228c4bce738a329d"} Apr 17 14:09:46.664250 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:46.664220 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" event={"ID":"d3d37f50-e860-4ca0-9480-8785e349ad48","Type":"ContainerStarted","Data":"9a460bcc77bfcbc98aa3192d79b3ec487a574235a0bc5cb11062bf1f67766429"} Apr 17 14:09:46.680157 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:46.680108 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-b6tjh" podStartSLOduration=32.874967989 podStartE2EDuration="34.680094903s" podCreationTimestamp="2026-04-17 14:09:12 +0000 UTC" firstStartedPulling="2026-04-17 14:09:44.112187377 +0000 UTC m=+133.449978467" lastFinishedPulling="2026-04-17 14:09:45.917314278 +0000 UTC m=+135.255105381" observedRunningTime="2026-04-17 14:09:46.678792863 +0000 UTC m=+136.016583974" watchObservedRunningTime="2026-04-17 14:09:46.680094903 +0000 UTC m=+136.017886011" Apr 17 14:09:48.101732 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.101680 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:48.104095 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.104075 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a93e218-4c76-4f41-ac1c-71595ca764d4-samples-operator-tls\") pod 
\"cluster-samples-operator-6dc5bdb6b4-6qfvr\" (UID: \"6a93e218-4c76-4f41-ac1c-71595ca764d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:48.338417 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.338381 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-d8kc6\"" Apr 17 14:09:48.346850 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.346820 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" Apr 17 14:09:48.495250 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.495214 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr"] Apr 17 14:09:48.499703 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.499679 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-qnjxh"] Apr 17 14:09:48.504100 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.504079 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.506712 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.506694 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 17 14:09:48.508118 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.508098 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 17 14:09:48.508222 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.508151 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-cl66l\"" Apr 17 14:09:48.508543 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.508527 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 17 14:09:48.508666 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.508651 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 17 14:09:48.520465 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.520444 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-qnjxh"] Apr 17 14:09:48.606007 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.605919 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/69839ef7-4c5c-4b64-95e0-5187708bda48-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.606007 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.605950 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcvfz\" (UniqueName: \"kubernetes.io/projected/69839ef7-4c5c-4b64-95e0-5187708bda48-kube-api-access-qcvfz\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.606007 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:09:48.605973 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/69839ef7-4c5c-4b64-95e0-5187708bda48-data-volume\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.606263 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.606050 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/69839ef7-4c5c-4b64-95e0-5187708bda48-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.606263 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.606114 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/69839ef7-4c5c-4b64-95e0-5187708bda48-crio-socket\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.672917 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.672886 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" event={"ID":"6a93e218-4c76-4f41-ac1c-71595ca764d4","Type":"ContainerStarted","Data":"39d69df2a984da6f5b9e6322f9b73904517a33a8143287f85e19f391384c7845"} Apr 17 14:09:48.707375 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.707345 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/69839ef7-4c5c-4b64-95e0-5187708bda48-crio-socket\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.707611 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.707434 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/69839ef7-4c5c-4b64-95e0-5187708bda48-crio-socket\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.707611 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.707444 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/69839ef7-4c5c-4b64-95e0-5187708bda48-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.707611 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.707470 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qcvfz\" (UniqueName: \"kubernetes.io/projected/69839ef7-4c5c-4b64-95e0-5187708bda48-kube-api-access-qcvfz\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.707611 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.707498 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/69839ef7-4c5c-4b64-95e0-5187708bda48-data-volume\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.707611 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.707534 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/69839ef7-4c5c-4b64-95e0-5187708bda48-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.708028 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.707998 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/69839ef7-4c5c-4b64-95e0-5187708bda48-data-volume\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.708157 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.708141 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/69839ef7-4c5c-4b64-95e0-5187708bda48-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.710510 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.710490 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/69839ef7-4c5c-4b64-95e0-5187708bda48-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.715077 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.715057 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcvfz\" (UniqueName: \"kubernetes.io/projected/69839ef7-4c5c-4b64-95e0-5187708bda48-kube-api-access-qcvfz\") pod \"insights-runtime-extractor-qnjxh\" (UID: \"69839ef7-4c5c-4b64-95e0-5187708bda48\") " pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.837127 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.837095 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-qnjxh" Apr 17 14:09:48.965731 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:48.965625 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-qnjxh"] Apr 17 14:09:48.968179 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:48.968152 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69839ef7_4c5c_4b64_95e0_5187708bda48.slice/crio-d05ee75f5fe9b9f9ad088101f59657ea79c6cb9f4f4623c534ba30d6aa35184c WatchSource:0}: Error finding container d05ee75f5fe9b9f9ad088101f59657ea79c6cb9f4f4623c534ba30d6aa35184c: Status 404 returned error can't find the container with id d05ee75f5fe9b9f9ad088101f59657ea79c6cb9f4f4623c534ba30d6aa35184c Apr 17 14:09:49.495083 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.495048 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-865878b56c-zchzt"] Apr 17 14:09:49.503430 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.503398 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.506064 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.506034 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-865878b56c-zchzt"] Apr 17 14:09:49.507346 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.507104 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 17 14:09:49.507346 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.507153 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 17 14:09:49.507346 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.507176 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 17 14:09:49.507346 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.507225 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 17 14:09:49.507346 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.507256 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 17 14:09:49.507346 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.507323 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-gwdpx\"" Apr 17 14:09:49.507716 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.507428 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 17 14:09:49.507716 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.507591 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 17 14:09:49.616135 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.616096 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-oauth-serving-cert\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.616306 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:09:49.616153 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kd9l\" (UniqueName: \"kubernetes.io/projected/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-kube-api-access-8kd9l\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.616306 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.616244 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-config\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.616306 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.616278 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-service-ca\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.616418 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.616312 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-serving-cert\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.616418 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.616327 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-oauth-config\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.676443 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.676411 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-qnjxh" event={"ID":"69839ef7-4c5c-4b64-95e0-5187708bda48","Type":"ContainerStarted","Data":"0b2458c9208f34b329214e52a232583e9d706dc16af6102bc40d6b612b9d323d"} Apr 17 14:09:49.676443 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.676444 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-qnjxh" event={"ID":"69839ef7-4c5c-4b64-95e0-5187708bda48","Type":"ContainerStarted","Data":"d05ee75f5fe9b9f9ad088101f59657ea79c6cb9f4f4623c534ba30d6aa35184c"} Apr 17 14:09:49.716707 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.716676 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8kd9l\" (UniqueName: \"kubernetes.io/projected/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-kube-api-access-8kd9l\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.716904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.716735 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-config\") pod \"console-865878b56c-zchzt\" (UID: 
\"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.716904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.716782 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-service-ca\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.716904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.716823 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-serving-cert\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.716904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.716847 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-oauth-config\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.716904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.716899 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-oauth-serving-cert\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.717668 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.717604 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-service-ca\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.717668 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.717608 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-config\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.717853 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.717673 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-oauth-serving-cert\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.719635 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.719612 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-oauth-config\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.720472 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.720453 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-serving-cert\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.724484 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.724461 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kd9l\" (UniqueName: \"kubernetes.io/projected/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-kube-api-access-8kd9l\") pod \"console-865878b56c-zchzt\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.816800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.816708 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:09:49.982376 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:49.982353 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-865878b56c-zchzt"] Apr 17 14:09:49.986081 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:49.986057 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c48a882_c4d6_401e_8b76_fdfdcfd63b2a.slice/crio-a3bba40b123d98c85bd18f6074979f7866eaa4fc5d82d2ce065c92deddf78eaf WatchSource:0}: Error finding container a3bba40b123d98c85bd18f6074979f7866eaa4fc5d82d2ce065c92deddf78eaf: Status 404 returned error can't find the container with id a3bba40b123d98c85bd18f6074979f7866eaa4fc5d82d2ce065c92deddf78eaf Apr 17 14:09:50.681832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:50.681796 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-qnjxh" event={"ID":"69839ef7-4c5c-4b64-95e0-5187708bda48","Type":"ContainerStarted","Data":"2a028ffce6a43df94d6b6c0f3f26ea43dd9d7321dfbb5bf2dbdc76e34f1f500a"} Apr 17 14:09:50.682811 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:50.682789 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865878b56c-zchzt" event={"ID":"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a","Type":"ContainerStarted","Data":"a3bba40b123d98c85bd18f6074979f7866eaa4fc5d82d2ce065c92deddf78eaf"} Apr 17 14:09:51.433628 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.433588 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-45424"] Apr 17 14:09:51.437230 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.437204 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.439836 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.439804 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\"" Apr 17 14:09:51.439994 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.439837 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 17 14:09:51.439994 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.439845 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\"" Apr 17 14:09:51.440280 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.440255 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-7bg7t\"" Apr 17 14:09:51.447197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.446616 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-45424"] Apr 17 14:09:51.531310 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.531152 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.531471 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.531354 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xss6t\" (UniqueName: \"kubernetes.io/projected/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-kube-api-access-xss6t\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.531471 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.531426 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.531564 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.531475 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.632704 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.632662 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.632904 
ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.632729 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.632904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.632786 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.633029 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.632929 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xss6t\" (UniqueName: \"kubernetes.io/projected/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-kube-api-access-xss6t\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.633568 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.633542 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.635530 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.635500 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.635530 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.635519 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.640559 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.640536 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xss6t\" (UniqueName: \"kubernetes.io/projected/56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890-kube-api-access-xss6t\") pod \"prometheus-operator-5676c8c784-45424\" (UID: \"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:51.688257 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.688139 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" event={"ID":"6a93e218-4c76-4f41-ac1c-71595ca764d4","Type":"ContainerStarted","Data":"311232e2b8a3817f73d7754ec436a956aa2ca08f987cbd0d9c3e57cbcc888eaf"} Apr 17 14:09:51.688257 
ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.688184 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" event={"ID":"6a93e218-4c76-4f41-ac1c-71595ca764d4","Type":"ContainerStarted","Data":"ec3ae781f6dfb1a6c5d898b54c5522ec01263e54faf707ce0775940a9ba7a95b"} Apr 17 14:09:51.704695 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.704644 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6qfvr" podStartSLOduration=33.543495367 podStartE2EDuration="35.704625667s" podCreationTimestamp="2026-04-17 14:09:16 +0000 UTC" firstStartedPulling="2026-04-17 14:09:48.547653523 +0000 UTC m=+137.885444614" lastFinishedPulling="2026-04-17 14:09:50.708783821 +0000 UTC m=+140.046574914" observedRunningTime="2026-04-17 14:09:51.703126426 +0000 UTC m=+141.040917538" watchObservedRunningTime="2026-04-17 14:09:51.704625667 +0000 UTC m=+141.042416779" Apr 17 14:09:51.751871 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:51.751795 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" Apr 17 14:09:52.327179 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:52.327145 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-45424"] Apr 17 14:09:52.330385 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:52.330357 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56bb2c90_d8f5_4f9a_9e91_ff9a83a6f890.slice/crio-8aa498d7937c345b15cbec584a819e33e352e184fba7b79dacc769fa75a9cd22 WatchSource:0}: Error finding container 8aa498d7937c345b15cbec584a819e33e352e184fba7b79dacc769fa75a9cd22: Status 404 returned error can't find the container with id 8aa498d7937c345b15cbec584a819e33e352e184fba7b79dacc769fa75a9cd22 Apr 17 14:09:52.693297 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:52.693257 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-qnjxh" event={"ID":"69839ef7-4c5c-4b64-95e0-5187708bda48","Type":"ContainerStarted","Data":"a8dff460a1f0c550846dcf4e0411420d7269f29006d6b3f67dc30b3d66d4b7cb"} Apr 17 14:09:52.694652 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:52.694619 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" event={"ID":"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890","Type":"ContainerStarted","Data":"8aa498d7937c345b15cbec584a819e33e352e184fba7b79dacc769fa75a9cd22"} Apr 17 14:09:52.711574 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:52.711527 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-qnjxh" podStartSLOduration=1.5075731669999999 podStartE2EDuration="4.711511929s" podCreationTimestamp="2026-04-17 14:09:48 +0000 UTC" firstStartedPulling="2026-04-17 14:09:49.034360398 +0000 UTC m=+138.372151487" lastFinishedPulling="2026-04-17 14:09:52.238299161 +0000 UTC m=+141.576090249" observedRunningTime="2026-04-17 14:09:52.710807318 +0000 UTC m=+142.048598430" watchObservedRunningTime="2026-04-17 14:09:52.711511929 +0000 UTC m=+142.049303073" Apr 17 14:09:53.698937 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:53.698888 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-865878b56c-zchzt" event={"ID":"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a","Type":"ContainerStarted","Data":"492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434"} Apr 17 14:09:53.716080 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:53.716026 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-865878b56c-zchzt" podStartSLOduration=1.234451369 podStartE2EDuration="4.716012273s" podCreationTimestamp="2026-04-17 14:09:49 +0000 UTC" firstStartedPulling="2026-04-17 14:09:49.988005849 +0000 UTC m=+139.325796938" lastFinishedPulling="2026-04-17 14:09:53.46956675 +0000 UTC m=+142.807357842" observedRunningTime="2026-04-17 14:09:53.714616085 +0000 UTC m=+143.052407198" watchObservedRunningTime="2026-04-17 14:09:53.716012273 +0000 UTC m=+143.053803383" Apr 17 14:09:55.705348 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:55.705313 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" event={"ID":"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890","Type":"ContainerStarted","Data":"070f3c4f87cedbcfdfc03b5bf8de517aafad97ebc1eae958cd4c2b1263ac3408"} Apr 17 14:09:55.705348 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:55.705348 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" event={"ID":"56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890","Type":"ContainerStarted","Data":"22b3797e6c4f8060da3d241329bca9294319ced341c9ee2a14d55daa93fb2f6b"} Apr 17 14:09:55.721527 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:55.721477 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5676c8c784-45424" podStartSLOduration=2.367937835 podStartE2EDuration="4.721460701s" podCreationTimestamp="2026-04-17 14:09:51 +0000 UTC" firstStartedPulling="2026-04-17 14:09:52.332667917 +0000 UTC m=+141.670459010" lastFinishedPulling="2026-04-17 14:09:54.686190771 +0000 UTC m=+144.023981876" observedRunningTime="2026-04-17 14:09:55.721203759 +0000 UTC m=+145.058994896" watchObservedRunningTime="2026-04-17 14:09:55.721460701 +0000 UTC m=+145.059251813" Apr 17 14:09:57.768978 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.768939 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh"] Apr 17 14:09:57.772260 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.772241 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.774876 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.774837 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-kube-rbac-proxy-config\"" Apr 17 14:09:57.774983 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.774888 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-tls\"" Apr 17 14:09:57.776582 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.776566 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-dockercfg-grglj\"" Apr 17 14:09:57.785279 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.785261 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.785381 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.785299 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.785381 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.785324 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4dbl\" (UniqueName: \"kubernetes.io/projected/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-kube-api-access-j4dbl\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.785381 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.785354 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.788991 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.788968 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh"] Apr 17 14:09:57.808421 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.808397 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-m6s4q"] Apr 17 14:09:57.814111 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.814084 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.816362 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.816339 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 17 14:09:57.816495 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.816369 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 17 14:09:57.816495 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.816430 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 17 14:09:57.816495 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.816433 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-t554z\"" Apr 17 14:09:57.886628 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886591 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j4dbl\" (UniqueName: \"kubernetes.io/projected/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-kube-api-access-j4dbl\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.886800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886639 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.886800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886661 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbx8m\" (UniqueName: \"kubernetes.io/projected/51af9648-c2cc-494b-bd12-803fa91c0c24-kube-api-access-sbx8m\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.886800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886679 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-root\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.886800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886699 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-wtmp\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.886800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886715 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-sys\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " 
pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.886800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886732 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-textfile\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.887129 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886846 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-tls\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.887129 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886921 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/51af9648-c2cc-494b-bd12-803fa91c0c24-metrics-client-ca\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.887129 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.886967 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-accelerators-collector-config\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.887129 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.887008 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.887129 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.887045 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.887129 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.887088 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.887129 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:57.887113 2568 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Apr 17 14:09:57.887447 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:57.887189 2568 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-tls podName:efeaf65b-a5ac-4bbd-bc74-77d92f56b365 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:58.38717008 +0000 UTC m=+147.724961173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-tls") pod "openshift-state-metrics-9d44df66c-4zmlh" (UID: "efeaf65b-a5ac-4bbd-bc74-77d92f56b365") : secret "openshift-state-metrics-tls" not found Apr 17 14:09:57.887746 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.887724 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.889078 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.889061 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.897999 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.897974 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4dbl\" (UniqueName: \"kubernetes.io/projected/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-kube-api-access-j4dbl\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:57.988146 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988107 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/51af9648-c2cc-494b-bd12-803fa91c0c24-metrics-client-ca\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988146 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988152 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-accelerators-collector-config\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988360 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988186 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988360 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988237 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sbx8m\" (UniqueName: 
\"kubernetes.io/projected/51af9648-c2cc-494b-bd12-803fa91c0c24-kube-api-access-sbx8m\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988360 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988259 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-root\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988360 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988286 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-wtmp\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988360 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988311 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-sys\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988360 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988344 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-textfile\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988629 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988350 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-root\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988629 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988410 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-tls\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988721 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988654 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-textfile\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988721 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988678 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-wtmp\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988721 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988694 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/51af9648-c2cc-494b-bd12-803fa91c0c24-sys\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988896 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988777 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/51af9648-c2cc-494b-bd12-803fa91c0c24-metrics-client-ca\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.988896 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.988788 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-accelerators-collector-config\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.990529 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.990510 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:57.990625 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:57.990588 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/51af9648-c2cc-494b-bd12-803fa91c0c24-node-exporter-tls\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:58.001393 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.001370 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbx8m\" (UniqueName: \"kubernetes.io/projected/51af9648-c2cc-494b-bd12-803fa91c0c24-kube-api-access-sbx8m\") pod \"node-exporter-m6s4q\" (UID: \"51af9648-c2cc-494b-bd12-803fa91c0c24\") " pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:58.123299 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.123217 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-m6s4q" Apr 17 14:09:58.131749 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:58.131715 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51af9648_c2cc_494b_bd12_803fa91c0c24.slice/crio-d24d5251110cc0b25a91173c182b9399dc32e172a4734c09ba3fb2f58f79749b WatchSource:0}: Error finding container d24d5251110cc0b25a91173c182b9399dc32e172a4734c09ba3fb2f58f79749b: Status 404 returned error can't find the container with id d24d5251110cc0b25a91173c182b9399dc32e172a4734c09ba3fb2f58f79749b Apr 17 14:09:58.392020 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.391939 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:58.394551 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.394521 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/efeaf65b-a5ac-4bbd-bc74-77d92f56b365-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-4zmlh\" (UID: \"efeaf65b-a5ac-4bbd-bc74-77d92f56b365\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:58.680960 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.680914 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" Apr 17 14:09:58.716925 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.716294 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m6s4q" event={"ID":"51af9648-c2cc-494b-bd12-803fa91c0c24","Type":"ContainerStarted","Data":"d24d5251110cc0b25a91173c182b9399dc32e172a4734c09ba3fb2f58f79749b"} Apr 17 14:09:58.834940 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.834893 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh"] Apr 17 14:09:58.845347 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.845312 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 17 14:09:58.849738 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.849711 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.852383 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.852357 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 17 14:09:58.852554 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.852381 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 17 14:09:58.852639 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.852572 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 17 14:09:58.852726 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.852652 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 17 14:09:58.852815 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.852784 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 17 14:09:58.853049 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.853025 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 17 14:09:58.853139 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.853088 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 17 14:09:58.853244 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.853228 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 17 14:09:58.853308 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.853275 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-gxxxk\"" Apr 17 14:09:58.853437 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.853420 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 17 14:09:58.862461 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.862401 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 17 14:09:58.899532 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899437 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899532 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899506 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2ws\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-kube-api-access-pn2ws\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899532 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899537 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899566 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-web-config\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899590 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899621 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-volume\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899641 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899683 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-out\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899710 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899742 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899765 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy\") pod 
\"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.899800 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899802 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:58.900276 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:58.899838 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001409 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001327 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-out\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001409 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001363 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001409 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001393 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001665 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001472 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001665 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001533 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001665 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001573 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001665 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:09:59.001598 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle podName:9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03 nodeName:}" failed. No retries permitted until 2026-04-17 14:09:59.501575323 +0000 UTC m=+148.839366431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03") : configmap references non-existent config key: ca-bundle.crt Apr 17 14:09:59.001950 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001672 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001950 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001733 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pn2ws\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-kube-api-access-pn2ws\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001950 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001769 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001950 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001800 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-web-config\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001950 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001828 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001950 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001884 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-volume\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.001950 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.001910 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-main-tls\") pod 
\"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.002271 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.002048 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.002271 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.002168 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.004644 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.004586 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.006705 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.006539 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.007070 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.007002 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-web-config\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.007207 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.007187 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-out\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.007269 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.007227 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.007269 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.007229 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.007437 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.007416 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" 
(UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.007560 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.007543 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.008549 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.008522 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-volume\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.010130 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.010107 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2ws\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-kube-api-access-pn2ws\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.010643 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:59.010622 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefeaf65b_a5ac_4bbd_bc74_77d92f56b365.slice/crio-6cef80d4da4405a1db3c9b659a7e8e0162439d5c05357a21d34963da88487643 WatchSource:0}: Error finding container 6cef80d4da4405a1db3c9b659a7e8e0162439d5c05357a21d34963da88487643: Status 404 returned error can't find the container with id 6cef80d4da4405a1db3c9b659a7e8e0162439d5c05357a21d34963da88487643 Apr 17 14:09:59.506262 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.506220 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.507071 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.507045 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:09:59.721385 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.721347 2568 generic.go:358] "Generic (PLEG): container finished" podID="51af9648-c2cc-494b-bd12-803fa91c0c24" containerID="e626fd057060782c9c22c11188699e59de539822a9786eb7ba962fb586c6443e" exitCode=0 Apr 17 14:09:59.721567 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.721439 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m6s4q" event={"ID":"51af9648-c2cc-494b-bd12-803fa91c0c24","Type":"ContainerDied","Data":"e626fd057060782c9c22c11188699e59de539822a9786eb7ba962fb586c6443e"} Apr 17 14:09:59.723316 ip-10-0-140-104 kubenswrapper[2568]: I0417 
Apr 17 14:09:59.721385 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.721347 2568 generic.go:358] "Generic (PLEG): container finished" podID="51af9648-c2cc-494b-bd12-803fa91c0c24" containerID="e626fd057060782c9c22c11188699e59de539822a9786eb7ba962fb586c6443e" exitCode=0
Apr 17 14:09:59.721567 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.721439 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m6s4q" event={"ID":"51af9648-c2cc-494b-bd12-803fa91c0c24","Type":"ContainerDied","Data":"e626fd057060782c9c22c11188699e59de539822a9786eb7ba962fb586c6443e"}
Apr 17 14:09:59.723316 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.723287 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" event={"ID":"efeaf65b-a5ac-4bbd-bc74-77d92f56b365","Type":"ContainerStarted","Data":"e8acf8e49f8e02536a73c57654483ac3bbfe67c897fec80d9072fcdb475c1c89"}
Apr 17 14:09:59.723443 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.723320 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" event={"ID":"efeaf65b-a5ac-4bbd-bc74-77d92f56b365","Type":"ContainerStarted","Data":"b111f90a92dfeca6a9345084d3bc1b7b89a2e777cbf2da412fe8c049a413a0d8"}
Apr 17 14:09:59.723443 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.723330 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" event={"ID":"efeaf65b-a5ac-4bbd-bc74-77d92f56b365","Type":"ContainerStarted","Data":"6cef80d4da4405a1db3c9b659a7e8e0162439d5c05357a21d34963da88487643"}
Apr 17 14:09:59.747712 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.747682 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-9475c9469-l2zbd"]
Apr 17 14:09:59.755466 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.755432 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.762239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.759673 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-dockercfg-fq747\""
Apr 17 14:09:59.762239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.759895 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-web\""
Apr 17 14:09:59.762239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.759941 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-grpc-tls-8nn6lmkkoa50a\""
Apr 17 14:09:59.762239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.759948 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-9475c9469-l2zbd"]
Apr 17 14:09:59.762239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.760085 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-metrics\""
Apr 17 14:09:59.762239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.760412 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-rules\""
Apr 17 14:09:59.762239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.760614 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy\""
Apr 17 14:09:59.762239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.760841 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-tls\""
Apr 17 14:09:59.767298 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.767273 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Apr 17 14:09:59.808835 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.808516 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vfk\" (UniqueName: \"kubernetes.io/projected/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-kube-api-access-x8vfk\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.808835 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.808619 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-metrics-client-ca\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.808835 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.808670 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.808835 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.808706 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.808835 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.808760 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-grpc-tls\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.809177 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.809066 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-tls\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.809177 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.809128 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.809288 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.809199 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd"
Apr 17 14:09:59.818048 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.817939 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-865878b56c-zchzt"
Apr 17 14:09:59.818172 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.818100 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-865878b56c-zchzt"
Apr 17 14:09:59.821197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.820999 2568 patch_prober.go:28] interesting pod/console-865878b56c-zchzt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.133.0.15:8443/health\": dial tcp 10.133.0.15:8443: connect: connection refused" start-of-body=
Apr 17 14:09:59.821197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.821053 2568 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-865878b56c-zchzt" podUID="6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" containerName="console" probeResult="failure" output="Get \"https://10.133.0.15:8443/health\": dial tcp 10.133.0.15:8443: connect: connection refused"
\"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.910351 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.910052 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.910351 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.910114 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x8vfk\" (UniqueName: \"kubernetes.io/projected/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-kube-api-access-x8vfk\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.910351 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.910260 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-metrics-client-ca\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.911554 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.911504 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-metrics-client-ca\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.912935 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.912885 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.913463 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.913282 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.914207 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.914163 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.915612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.915567 2568 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.916834 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.916763 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-thanos-querier-tls\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.917113 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.917040 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-secret-grpc-tls\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.917745 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.917721 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 17 14:09:59.921193 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:09:59.921132 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8vfk\" (UniqueName: \"kubernetes.io/projected/6ec54637-4eaa-4c78-9ade-06fa724dd4b9-kube-api-access-x8vfk\") pod \"thanos-querier-9475c9469-l2zbd\" (UID: \"6ec54637-4eaa-4c78-9ade-06fa724dd4b9\") " pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:09:59.922661 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:09:59.922622 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bbb9e8f_45ed_4e69_b0ef_fe4f9829df03.slice/crio-63140b79589a4433552ae0db3103f518fc4e03bdab9b1ac67b0c8df8acf75143 WatchSource:0}: Error finding container 63140b79589a4433552ae0db3103f518fc4e03bdab9b1ac67b0c8df8acf75143: Status 404 returned error can't find the container with id 63140b79589a4433552ae0db3103f518fc4e03bdab9b1ac67b0c8df8acf75143 Apr 17 14:10:00.084359 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.084257 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:10:00.369775 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.369687 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-9475c9469-l2zbd"] Apr 17 14:10:00.371870 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:10:00.371829 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ec54637_4eaa_4c78_9ade_06fa724dd4b9.slice/crio-2c201be3d70717a7394c711197b565094408bb9ab3b1d70c847480d3a73b3a3d WatchSource:0}: Error finding container 2c201be3d70717a7394c711197b565094408bb9ab3b1d70c847480d3a73b3a3d: Status 404 returned error can't find the container with id 2c201be3d70717a7394c711197b565094408bb9ab3b1d70c847480d3a73b3a3d Apr 17 14:10:00.728947 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.728892 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" event={"ID":"6ec54637-4eaa-4c78-9ade-06fa724dd4b9","Type":"ContainerStarted","Data":"2c201be3d70717a7394c711197b565094408bb9ab3b1d70c847480d3a73b3a3d"} Apr 17 14:10:00.731047 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.731017 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" event={"ID":"efeaf65b-a5ac-4bbd-bc74-77d92f56b365","Type":"ContainerStarted","Data":"ada8cbfc16a617e36aae89b4b16bcdae76ba1b65f9648fdeece7556e955c67c7"} Apr 17 14:10:00.733104 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.733085 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m6s4q" event={"ID":"51af9648-c2cc-494b-bd12-803fa91c0c24","Type":"ContainerStarted","Data":"cfb00cac01e2347d1ecf03d1cb322182bad9b0b9f81b150fb62eb95103b63f46"} Apr 17 14:10:00.733104 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.733109 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m6s4q" event={"ID":"51af9648-c2cc-494b-bd12-803fa91c0c24","Type":"ContainerStarted","Data":"bd1f07abe8a87427edda180c685385ef57cd6825ab4c0394bbfdb3acd338551d"} Apr 17 14:10:00.734210 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.734171 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerStarted","Data":"63140b79589a4433552ae0db3103f518fc4e03bdab9b1ac67b0c8df8acf75143"} Apr 17 14:10:00.752999 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.752949 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-4zmlh" podStartSLOduration=2.60952232 podStartE2EDuration="3.752935227s" podCreationTimestamp="2026-04-17 14:09:57 +0000 UTC" firstStartedPulling="2026-04-17 14:09:59.149273203 +0000 UTC m=+148.487064303" lastFinishedPulling="2026-04-17 14:10:00.292686104 +0000 UTC m=+149.630477210" observedRunningTime="2026-04-17 14:10:00.752298559 +0000 UTC m=+150.090089670" watchObservedRunningTime="2026-04-17 14:10:00.752935227 +0000 UTC m=+150.090726337" Apr 17 14:10:00.775672 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:00.775603 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-m6s4q" podStartSLOduration=2.845220443 podStartE2EDuration="3.775583807s" podCreationTimestamp="2026-04-17 14:09:57 +0000 UTC" 
firstStartedPulling="2026-04-17 14:09:58.133374265 +0000 UTC m=+147.471165354" lastFinishedPulling="2026-04-17 14:09:59.063737629 +0000 UTC m=+148.401528718" observedRunningTime="2026-04-17 14:10:00.774354962 +0000 UTC m=+150.112146075" watchObservedRunningTime="2026-04-17 14:10:00.775583807 +0000 UTC m=+150.113374920" Apr 17 14:10:01.738781 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:01.738738 2568 generic.go:358] "Generic (PLEG): container finished" podID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerID="166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0" exitCode=0 Apr 17 14:10:01.739260 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:01.738824 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerDied","Data":"166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0"} Apr 17 14:10:02.173588 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.172609 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-74c596c9bf-ckdfj"] Apr 17 14:10:02.176313 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.176296 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.178847 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.178817 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-client-certs\"" Apr 17 14:10:02.178982 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.178821 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kubelet-serving-ca-bundle\"" Apr 17 14:10:02.178982 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.178824 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-server-audit-profiles\"" Apr 17 14:10:02.180023 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.180006 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-dockercfg-r7dgq\"" Apr 17 14:10:02.180112 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.180025 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-tls\"" Apr 17 14:10:02.180112 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.180104 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-6e7bieradgrsn\"" Apr 17 14:10:02.182961 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.182939 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-74c596c9bf-ckdfj"] Apr 17 14:10:02.231271 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.231228 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-secret-metrics-server-client-certs\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.231465 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.231327 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/1ef97ae2-261e-4072-8667-f11c1fbf0024-audit-log\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.231465 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.231350 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2vwh\" (UniqueName: \"kubernetes.io/projected/1ef97ae2-261e-4072-8667-f11c1fbf0024-kube-api-access-l2vwh\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.231465 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.231446 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-client-ca-bundle\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.231626 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.231545 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ef97ae2-261e-4072-8667-f11c1fbf0024-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.231626 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.231573 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1ef97ae2-261e-4072-8667-f11c1fbf0024-metrics-server-audit-profiles\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.231626 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.231592 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-secret-metrics-server-tls\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.332184 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.332144 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1ef97ae2-261e-4072-8667-f11c1fbf0024-audit-log\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.332344 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.332198 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l2vwh\" (UniqueName: \"kubernetes.io/projected/1ef97ae2-261e-4072-8667-f11c1fbf0024-kube-api-access-l2vwh\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.332344 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.332234 2568 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-client-ca-bundle\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.332344 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.332307 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ef97ae2-261e-4072-8667-f11c1fbf0024-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.332344 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.332332 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1ef97ae2-261e-4072-8667-f11c1fbf0024-metrics-server-audit-profiles\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.332568 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.332362 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-secret-metrics-server-tls\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.332568 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.332414 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-secret-metrics-server-client-certs\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.332667 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.332561 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1ef97ae2-261e-4072-8667-f11c1fbf0024-audit-log\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.333285 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.333235 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ef97ae2-261e-4072-8667-f11c1fbf0024-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.333684 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.333661 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1ef97ae2-261e-4072-8667-f11c1fbf0024-metrics-server-audit-profiles\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.335233 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.335208 2568 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-secret-metrics-server-client-certs\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.335448 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.335420 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-client-ca-bundle\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.335572 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.335527 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1ef97ae2-261e-4072-8667-f11c1fbf0024-secret-metrics-server-tls\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.340046 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.340021 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2vwh\" (UniqueName: \"kubernetes.io/projected/1ef97ae2-261e-4072-8667-f11c1fbf0024-kube-api-access-l2vwh\") pod \"metrics-server-74c596c9bf-ckdfj\" (UID: \"1ef97ae2-261e-4072-8667-f11c1fbf0024\") " pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.487819 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.486853 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:02.552567 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.552506 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm"] Apr 17 14:10:02.557244 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.557213 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" Apr 17 14:10:02.559949 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.559795 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"monitoring-plugin-cert\"" Apr 17 14:10:02.559949 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.559816 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"default-dockercfg-4xvpp\"" Apr 17 14:10:02.565647 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.565593 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm"] Apr 17 14:10:02.636191 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.636144 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7e9b4543-ab44-4004-8de1-369205a75e30-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-qplhm\" (UID: \"7e9b4543-ab44-4004-8de1-369205a75e30\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" Apr 17 14:10:02.737494 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:02.737461 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7e9b4543-ab44-4004-8de1-369205a75e30-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-qplhm\" (UID: \"7e9b4543-ab44-4004-8de1-369205a75e30\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" Apr 17 14:10:02.737671 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:10:02.737631 2568 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: secret "monitoring-plugin-cert" not found Apr 17 14:10:02.737722 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:10:02.737707 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e9b4543-ab44-4004-8de1-369205a75e30-monitoring-plugin-cert podName:7e9b4543-ab44-4004-8de1-369205a75e30 nodeName:}" failed. No retries permitted until 2026-04-17 14:10:03.237686271 +0000 UTC m=+152.575477362 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/7e9b4543-ab44-4004-8de1-369205a75e30-monitoring-plugin-cert") pod "monitoring-plugin-7dccd58f55-qplhm" (UID: "7e9b4543-ab44-4004-8de1-369205a75e30") : secret "monitoring-plugin-cert" not found Apr 17 14:10:03.046984 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.046956 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-74c596c9bf-ckdfj"] Apr 17 14:10:03.049800 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:10:03.049769 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef97ae2_261e_4072_8667_f11c1fbf0024.slice/crio-b95daaf711162f6724dc974a5ac9fbcf45397b7c56735dceef4304ec1ce6463f WatchSource:0}: Error finding container b95daaf711162f6724dc974a5ac9fbcf45397b7c56735dceef4304ec1ce6463f: Status 404 returned error can't find the container with id b95daaf711162f6724dc974a5ac9fbcf45397b7c56735dceef4304ec1ce6463f Apr 17 14:10:03.243341 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.243298 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7e9b4543-ab44-4004-8de1-369205a75e30-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-qplhm\" (UID: \"7e9b4543-ab44-4004-8de1-369205a75e30\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" Apr 17 14:10:03.246038 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.246005 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7e9b4543-ab44-4004-8de1-369205a75e30-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-qplhm\" (UID: \"7e9b4543-ab44-4004-8de1-369205a75e30\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" Apr 17 14:10:03.473576 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.473489 2568 util.go:30] "No sandbox for pod can be found. 
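This is the second mount failure in this boot that resolves itself on retry: the first was the missing ca-bundle.crt key at 14:09:59, and here the monitoring-plugin-cert Secret did not exist yet at 14:10:02.737631, so the operation was re-queued with durationBeforeRetry 500ms and the retry at 14:10:03.243298 succeeded once the Secret had been created. On repeated failures the kubelet grows this delay exponentially; only the 500ms initial delay is taken from this log, while the doubling factor and the cap below are assumed, commonly cited defaults:

    # Sketch of the retry pacing behind "durationBeforeRetry": each failed
    # mount is re-queued with an exponentially growing delay.
    def backoff_schedule(initial=0.5, factor=2.0, cap=122.0, attempts=8):
        # initial=0.5s matches the log; factor and cap (2m2s) are assumptions
        delay, out = initial, []
        for _ in range(attempts):
            out.append(delay)
            delay = min(delay * factor, cap)
        return out

    print(backoff_schedule())  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]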
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" Apr 17 14:10:03.713939 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.713882 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm"] Apr 17 14:10:03.716502 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:10:03.716470 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e9b4543_ab44_4004_8de1_369205a75e30.slice/crio-3affe72076cf5ad93cfb885daf02fd75f90cff3818093c99e4a152ccbe9bc221 WatchSource:0}: Error finding container 3affe72076cf5ad93cfb885daf02fd75f90cff3818093c99e4a152ccbe9bc221: Status 404 returned error can't find the container with id 3affe72076cf5ad93cfb885daf02fd75f90cff3818093c99e4a152ccbe9bc221 Apr 17 14:10:03.748598 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.748551 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" event={"ID":"7e9b4543-ab44-4004-8de1-369205a75e30","Type":"ContainerStarted","Data":"3affe72076cf5ad93cfb885daf02fd75f90cff3818093c99e4a152ccbe9bc221"} Apr 17 14:10:03.750552 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.750521 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerStarted","Data":"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad"} Apr 17 14:10:03.752756 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.752733 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" event={"ID":"6ec54637-4eaa-4c78-9ade-06fa724dd4b9","Type":"ContainerStarted","Data":"b1aa210c894c3fa522e7563a7e64897eeaf728f9a2004ee0c9b63d6557897af3"} Apr 17 14:10:03.752889 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.752773 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" event={"ID":"6ec54637-4eaa-4c78-9ade-06fa724dd4b9","Type":"ContainerStarted","Data":"f0157f0a8a3d09dcf08eb04f187e8a5d60685a07c4833c4b76895c7cdb686945"} Apr 17 14:10:03.752889 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.752788 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" event={"ID":"6ec54637-4eaa-4c78-9ade-06fa724dd4b9","Type":"ContainerStarted","Data":"bc8882d0104638e019ffa09a2cf21492745a4e1a6d0b4e505ec0b80a5607b543"} Apr 17 14:10:03.755489 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.754469 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" event={"ID":"1ef97ae2-261e-4072-8667-f11c1fbf0024","Type":"ContainerStarted","Data":"b95daaf711162f6724dc974a5ac9fbcf45397b7c56735dceef4304ec1ce6463f"} Apr 17 14:10:03.983776 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.983737 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:10:03.990134 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.990115 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:03.992909 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.992868 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls\"" Apr 17 14:10:03.993748 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.993360 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-sidecar-tls\"" Apr 17 14:10:03.993748 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.993387 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-dockercfg-xjch6\"" Apr 17 14:10:03.993748 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.993602 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-web-config\"" Apr 17 14:10:03.993748 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.993637 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-rbac-proxy\"" Apr 17 14:10:03.994072 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.993906 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s\"" Apr 17 14:10:03.998770 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.998246 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-client-certs\"" Apr 17 14:10:04.004483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.999400 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-trusted-ca-bundle\"" Apr 17 14:10:04.004483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.999487 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-prometheus-http-client-file\"" Apr 17 14:10:04.004483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:03.999649 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls-assets-0\"" Apr 17 14:10:04.004483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.000048 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-kube-rbac-proxy-web\"" Apr 17 14:10:04.004483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.000354 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"serving-certs-ca-bundle\"" Apr 17 14:10:04.004483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.000719 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-grpc-tls-14bn2bg7nbca6\"" Apr 17 14:10:04.004483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.002626 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-rulefiles-0\"" Apr 17 14:10:04.004483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.002643 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:10:04.050528 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.050492 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmc9c\" (UniqueName: 
\"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-kube-api-access-qmc9c\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051020 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.050595 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051020 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.050659 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051020 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.050685 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051020 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.050816 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config-out\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051020 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.050848 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051422 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051398 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051501 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051459 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-web-config\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051571 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051553 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: 
\"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051656 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051591 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051717 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051667 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051771 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051725 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051822 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051789 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051899 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051843 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.051948 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051907 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.052007 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.051962 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.052059 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.052049 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " 
pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.052204 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.052187 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.095995 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.095772 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-865878b56c-zchzt"] Apr 17 14:10:04.153712 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.153662 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.153712 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.153715 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.153983 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.153756 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.153983 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.153791 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.153983 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.153819 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.153983 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.153845 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.153983 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.153924 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " 
pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.154232 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.153987 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.154232 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154016 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmc9c\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-kube-api-access-qmc9c\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.154232 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154044 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154669 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154713 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154774 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config-out\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154803 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154840 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154895 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-web-config\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154942 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.154976 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.155359 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.155686 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.155710 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.155801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.155804 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.156914 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.156696 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.160165 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.160139 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 
14:10:04.160353 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.160328 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-web-config\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.161336 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.161301 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.161445 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.161421 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.162289 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.162264 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.162782 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.162690 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.162782 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.162757 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.162782 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.162779 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.163070 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.163019 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config-out\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.163464 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.163425 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " 
pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.164107 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.163899 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmc9c\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-kube-api-access-qmc9c\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.164305 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.164285 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.164401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.164295 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.309303 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.309225 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:04.483770 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.483706 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:10:04.490055 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:10:04.490026 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf0cb3b3_39c8_4938_87dd_d3e263fbb0cf.slice/crio-7dc67697e27af8991296a18d27fe1384d3c64ed61358a18b48d955cb2f05d6cf WatchSource:0}: Error finding container 7dc67697e27af8991296a18d27fe1384d3c64ed61358a18b48d955cb2f05d6cf: Status 404 returned error can't find the container with id 7dc67697e27af8991296a18d27fe1384d3c64ed61358a18b48d955cb2f05d6cf Apr 17 14:10:04.764619 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.764512 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerStarted","Data":"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567"} Apr 17 14:10:04.764619 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.764556 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerStarted","Data":"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c"} Apr 17 14:10:04.764619 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.764571 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerStarted","Data":"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9"} Apr 17 14:10:04.764619 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.764583 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerStarted","Data":"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95"} 
Apr 17 14:10:04.768156 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.768127 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" event={"ID":"6ec54637-4eaa-4c78-9ade-06fa724dd4b9","Type":"ContainerStarted","Data":"91386c37ed190d67f28083ff7022ebe47e1367c8896a56c0ad81fa0519f50d8d"} Apr 17 14:10:04.768296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.768163 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" event={"ID":"6ec54637-4eaa-4c78-9ade-06fa724dd4b9","Type":"ContainerStarted","Data":"a26c14be472f9b0b7a7e183cee251f8e78df954cd69a469931a36011c7977bb9"} Apr 17 14:10:04.768296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.768175 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" event={"ID":"6ec54637-4eaa-4c78-9ade-06fa724dd4b9","Type":"ContainerStarted","Data":"aead8e87f8e49f450ee5e2213d11e32cc8e9d91aee9bba49eaac6e15d395d02d"} Apr 17 14:10:04.768296 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.768278 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:10:04.770238 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.770212 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerStarted","Data":"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757"} Apr 17 14:10:04.770364 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.770245 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerStarted","Data":"7dc67697e27af8991296a18d27fe1384d3c64ed61358a18b48d955cb2f05d6cf"} Apr 17 14:10:04.795189 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:04.795118 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" podStartSLOduration=1.815121672 podStartE2EDuration="5.795097508s" podCreationTimestamp="2026-04-17 14:09:59 +0000 UTC" firstStartedPulling="2026-04-17 14:10:00.373681003 +0000 UTC m=+149.711472092" lastFinishedPulling="2026-04-17 14:10:04.353656825 +0000 UTC m=+153.691447928" observedRunningTime="2026-04-17 14:10:04.793229601 +0000 UTC m=+154.131020714" watchObservedRunningTime="2026-04-17 14:10:04.795097508 +0000 UTC m=+154.132888620" Apr 17 14:10:05.775591 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.775496 2568 generic.go:358] "Generic (PLEG): container finished" podID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" exitCode=0 Apr 17 14:10:05.776036 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.775586 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerDied","Data":"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757"} Apr 17 14:10:05.777006 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.776982 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" event={"ID":"7e9b4543-ab44-4004-8de1-369205a75e30","Type":"ContainerStarted","Data":"2cc4f59446e30b0b59290aa30824fcedbf7d07aebdc2e9556a2b257aaff87688"} Apr 17 
14:10:05.777256 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.777181 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" Apr 17 14:10:05.780505 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.780470 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerStarted","Data":"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6"} Apr 17 14:10:05.782087 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.782061 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" event={"ID":"1ef97ae2-261e-4072-8667-f11c1fbf0024","Type":"ContainerStarted","Data":"ba5a8f34718ea5f48cc5d18631caed09bdce0d8bb3b5ab3e80941b3e39c6c2bb"} Apr 17 14:10:05.783041 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.783023 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" Apr 17 14:10:05.819260 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.819208 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" podStartSLOduration=1.436565968 podStartE2EDuration="3.819188787s" podCreationTimestamp="2026-04-17 14:10:02 +0000 UTC" firstStartedPulling="2026-04-17 14:10:03.05217481 +0000 UTC m=+152.389965899" lastFinishedPulling="2026-04-17 14:10:05.434797623 +0000 UTC m=+154.772588718" observedRunningTime="2026-04-17 14:10:05.818572231 +0000 UTC m=+155.156363341" watchObservedRunningTime="2026-04-17 14:10:05.819188787 +0000 UTC m=+155.156979899" Apr 17 14:10:05.842401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.842344 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.3406410859999998 podStartE2EDuration="7.842328822s" podCreationTimestamp="2026-04-17 14:09:58 +0000 UTC" firstStartedPulling="2026-04-17 14:09:59.925020639 +0000 UTC m=+149.262811728" lastFinishedPulling="2026-04-17 14:10:05.426708358 +0000 UTC m=+154.764499464" observedRunningTime="2026-04-17 14:10:05.840801729 +0000 UTC m=+155.178592839" watchObservedRunningTime="2026-04-17 14:10:05.842328822 +0000 UTC m=+155.180119931" Apr 17 14:10:05.857531 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:05.857474 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-qplhm" podStartSLOduration=2.138167344 podStartE2EDuration="3.857454922s" podCreationTimestamp="2026-04-17 14:10:02 +0000 UTC" firstStartedPulling="2026-04-17 14:10:03.718778523 +0000 UTC m=+153.056569632" lastFinishedPulling="2026-04-17 14:10:05.438066121 +0000 UTC m=+154.775857210" observedRunningTime="2026-04-17 14:10:05.856039679 +0000 UTC m=+155.193830792" watchObservedRunningTime="2026-04-17 14:10:05.857454922 +0000 UTC m=+155.195246034" Apr 17 14:10:08.793746 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:08.793711 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerStarted","Data":"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217"} Apr 17 14:10:09.528565 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:10:09.528520 2568 pod_workers.go:1301] "Error syncing pod, skipping" 
err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-mrqpz" podUID="c99bcc58-e14e-4455-8308-f2a36ad35eff" Apr 17 14:10:09.542693 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:10:09.542657 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-n4248" podUID="f7bc9c86-d3a7-43d7-9862-6170cb691894" Apr 17 14:10:09.802186 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:09.802094 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerStarted","Data":"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396"} Apr 17 14:10:09.802186 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:09.802122 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:10:09.802186 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:09.802144 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerStarted","Data":"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e"} Apr 17 14:10:09.802186 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:09.802157 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mrqpz" Apr 17 14:10:09.802186 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:09.802159 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerStarted","Data":"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60"} Apr 17 14:10:09.802633 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:09.802193 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerStarted","Data":"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564"} Apr 17 14:10:09.802633 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:09.802202 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerStarted","Data":"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591"} Apr 17 14:10:09.832209 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:09.832146 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.936616248 podStartE2EDuration="6.832128582s" podCreationTimestamp="2026-04-17 14:10:03 +0000 UTC" firstStartedPulling="2026-04-17 14:10:05.776891055 +0000 UTC m=+155.114682148" lastFinishedPulling="2026-04-17 14:10:08.672403377 +0000 UTC m=+158.010194482" observedRunningTime="2026-04-17 14:10:09.829917613 +0000 UTC m=+159.167708749" watchObservedRunningTime="2026-04-17 14:10:09.832128582 +0000 UTC m=+159.169919693" Apr 17 14:10:10.789643 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:10.789577 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-9475c9469-l2zbd" Apr 17 14:10:14.309991 ip-10-0-140-104 kubenswrapper[2568]: I0417 
14:10:14.309943 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:10:14.471803 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.471765 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:10:14.472214 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.472187 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:10:14.474407 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.474381 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c99bcc58-e14e-4455-8308-f2a36ad35eff-metrics-tls\") pod \"dns-default-mrqpz\" (UID: \"c99bcc58-e14e-4455-8308-f2a36ad35eff\") " pod="openshift-dns/dns-default-mrqpz" Apr 17 14:10:14.475025 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.475007 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7bc9c86-d3a7-43d7-9862-6170cb691894-cert\") pod \"ingress-canary-n4248\" (UID: \"f7bc9c86-d3a7-43d7-9862-6170cb691894\") " pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:10:14.606237 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.606161 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-fnmsv\"" Apr 17 14:10:14.606421 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.606269 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-nsnr8\"" Apr 17 14:10:14.614390 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.614349 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n4248" Apr 17 14:10:14.614543 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.614349 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mrqpz" Apr 17 14:10:14.763196 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.763130 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mrqpz"] Apr 17 14:10:14.767583 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:10:14.767553 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc99bcc58_e14e_4455_8308_f2a36ad35eff.slice/crio-60b30705bebadf51fe66c994cb94d0eea497aeab983c9bdbb3747dd63e0d8175 WatchSource:0}: Error finding container 60b30705bebadf51fe66c994cb94d0eea497aeab983c9bdbb3747dd63e0d8175: Status 404 returned error can't find the container with id 60b30705bebadf51fe66c994cb94d0eea497aeab983c9bdbb3747dd63e0d8175 Apr 17 14:10:14.783542 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.783515 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n4248"] Apr 17 14:10:14.786018 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:10:14.785992 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7bc9c86_d3a7_43d7_9862_6170cb691894.slice/crio-bc699516ffa887f69d2c7b82142256c5a8179de79bbd3a33c92d6e9f6edd0539 WatchSource:0}: Error finding container bc699516ffa887f69d2c7b82142256c5a8179de79bbd3a33c92d6e9f6edd0539: Status 404 returned error can't find the container with id bc699516ffa887f69d2c7b82142256c5a8179de79bbd3a33c92d6e9f6edd0539 Apr 17 14:10:14.818630 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.818591 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mrqpz" event={"ID":"c99bcc58-e14e-4455-8308-f2a36ad35eff","Type":"ContainerStarted","Data":"60b30705bebadf51fe66c994cb94d0eea497aeab983c9bdbb3747dd63e0d8175"} Apr 17 14:10:14.819734 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:14.819706 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n4248" event={"ID":"f7bc9c86-d3a7-43d7-9862-6170cb691894","Type":"ContainerStarted","Data":"bc699516ffa887f69d2c7b82142256c5a8179de79bbd3a33c92d6e9f6edd0539"} Apr 17 14:10:17.831354 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:17.831312 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mrqpz" event={"ID":"c99bcc58-e14e-4455-8308-f2a36ad35eff","Type":"ContainerStarted","Data":"9a71c2952eabedd249e4a0a1f672d0d63288abe00d2874d55694789a256f9e00"} Apr 17 14:10:17.831354 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:17.831357 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mrqpz" event={"ID":"c99bcc58-e14e-4455-8308-f2a36ad35eff","Type":"ContainerStarted","Data":"26eab33b07c508dc892dc9ff83b9b46098dbba4baa6c0e8b375f259933097c34"} Apr 17 14:10:17.831905 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:17.831466 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-mrqpz" Apr 17 14:10:17.832781 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:17.832748 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n4248" event={"ID":"f7bc9c86-d3a7-43d7-9862-6170cb691894","Type":"ContainerStarted","Data":"8d653f8421c10e615fc3455a2c4cf898f95563232fb3258b86d4000eeb45c17b"} Apr 17 14:10:17.851231 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:17.851171 2568 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-dns/dns-default-mrqpz" podStartSLOduration=129.663455596 podStartE2EDuration="2m11.851151696s" podCreationTimestamp="2026-04-17 14:08:06 +0000 UTC" firstStartedPulling="2026-04-17 14:10:14.770656139 +0000 UTC m=+164.108447228" lastFinishedPulling="2026-04-17 14:10:16.958352236 +0000 UTC m=+166.296143328" observedRunningTime="2026-04-17 14:10:17.850003824 +0000 UTC m=+167.187794974" watchObservedRunningTime="2026-04-17 14:10:17.851151696 +0000 UTC m=+167.188942808" Apr 17 14:10:17.872367 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:17.872315 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-n4248" podStartSLOduration=129.704033207 podStartE2EDuration="2m11.87229949s" podCreationTimestamp="2026-04-17 14:08:06 +0000 UTC" firstStartedPulling="2026-04-17 14:10:14.787816243 +0000 UTC m=+164.125607335" lastFinishedPulling="2026-04-17 14:10:16.956082529 +0000 UTC m=+166.293873618" observedRunningTime="2026-04-17 14:10:17.87165561 +0000 UTC m=+167.209446735" watchObservedRunningTime="2026-04-17 14:10:17.87229949 +0000 UTC m=+167.210090602" Apr 17 14:10:22.487624 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:22.487582 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:22.488009 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:22.487650 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:27.839019 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:27.838989 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mrqpz" Apr 17 14:10:29.126222 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.126153 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-865878b56c-zchzt" podUID="6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" containerName="console" containerID="cri-o://492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434" gracePeriod=15 Apr 17 14:10:29.369281 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.369257 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-865878b56c-zchzt_6c48a882-c4d6-401e-8b76-fdfdcfd63b2a/console/0.log" Apr 17 14:10:29.369469 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.369331 2568 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:10:29.412254 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412227 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-oauth-serving-cert\") pod \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " Apr 17 14:10:29.412432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412268 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-service-ca\") pod \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " Apr 17 14:10:29.412432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412299 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-config\") pod \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " Apr 17 14:10:29.412432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412325 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-oauth-config\") pod \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " Apr 17 14:10:29.412432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412388 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kd9l\" (UniqueName: \"kubernetes.io/projected/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-kube-api-access-8kd9l\") pod \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " Apr 17 14:10:29.412432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412429 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-serving-cert\") pod \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\" (UID: \"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a\") " Apr 17 14:10:29.412748 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412700 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-service-ca" (OuterVolumeSpecName: "service-ca") pod "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" (UID: "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:10:29.412748 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412710 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" (UID: "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:10:29.412964 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.412798 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-config" (OuterVolumeSpecName: "console-config") pod "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" (UID: "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:10:29.415022 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.414974 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" (UID: "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:10:29.415138 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.415070 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" (UID: "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:10:29.415329 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.415297 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-kube-api-access-8kd9l" (OuterVolumeSpecName: "kube-api-access-8kd9l") pod "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" (UID: "6c48a882-c4d6-401e-8b76-fdfdcfd63b2a"). InnerVolumeSpecName "kube-api-access-8kd9l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:10:29.513586 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.513550 2568 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-serving-cert\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:10:29.513586 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.513579 2568 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-oauth-serving-cert\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:10:29.513586 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.513589 2568 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-service-ca\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:10:29.513586 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.513599 2568 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-config\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:10:29.513841 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.513607 2568 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-console-oauth-config\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:10:29.513841 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.513616 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8kd9l\" (UniqueName: \"kubernetes.io/projected/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a-kube-api-access-8kd9l\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:10:29.870759 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.870682 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-865878b56c-zchzt_6c48a882-c4d6-401e-8b76-fdfdcfd63b2a/console/0.log" Apr 17 14:10:29.870759 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.870726 2568 generic.go:358] "Generic (PLEG): container finished" podID="6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" containerID="492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434" exitCode=2 Apr 17 14:10:29.870958 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.870779 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865878b56c-zchzt" event={"ID":"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a","Type":"ContainerDied","Data":"492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434"} Apr 17 14:10:29.870958 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.870786 2568 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-865878b56c-zchzt" Apr 17 14:10:29.870958 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.870801 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865878b56c-zchzt" event={"ID":"6c48a882-c4d6-401e-8b76-fdfdcfd63b2a","Type":"ContainerDied","Data":"a3bba40b123d98c85bd18f6074979f7866eaa4fc5d82d2ce065c92deddf78eaf"} Apr 17 14:10:29.870958 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.870818 2568 scope.go:117] "RemoveContainer" containerID="492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434" Apr 17 14:10:29.878988 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.878970 2568 scope.go:117] "RemoveContainer" containerID="492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434" Apr 17 14:10:29.879228 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:10:29.879210 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434\": container with ID starting with 492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434 not found: ID does not exist" containerID="492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434" Apr 17 14:10:29.879294 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.879239 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434"} err="failed to get container status \"492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434\": rpc error: code = NotFound desc = could not find container \"492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434\": container with ID starting with 492bf9df86536fdc585ab8a6527b6fb41b26298f96c8a8d338d91e0f67ec6434 not found: ID does not exist" Apr 17 14:10:29.891730 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.891705 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-865878b56c-zchzt"] Apr 17 14:10:29.894739 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:29.894716 2568 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-865878b56c-zchzt"] Apr 17 14:10:31.263265 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:31.263233 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" path="/var/lib/kubelet/pods/6c48a882-c4d6-401e-8b76-fdfdcfd63b2a/volumes" Apr 17 14:10:35.893766 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:35.893733 2568 generic.go:358] "Generic (PLEG): container finished" podID="fa9a1ab0-741d-44b8-9de1-9a0b296aee9c" containerID="5d3c72dad9f0ec124222e36b684f9f38003b2b84696a255cae7c5392b119946d" exitCode=0 Apr 17 14:10:35.894252 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:35.893791 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" event={"ID":"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c","Type":"ContainerDied","Data":"5d3c72dad9f0ec124222e36b684f9f38003b2b84696a255cae7c5392b119946d"} Apr 17 14:10:35.894252 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:35.894232 2568 scope.go:117] "RemoveContainer" containerID="5d3c72dad9f0ec124222e36b684f9f38003b2b84696a255cae7c5392b119946d" Apr 17 14:10:36.899000 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:36.898965 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-qvzpd" event={"ID":"fa9a1ab0-741d-44b8-9de1-9a0b296aee9c","Type":"ContainerStarted","Data":"9867efe03ffc492c021c4b386a7d9b86c4500cecfe1b251df17795f9710a3914"} Apr 17 14:10:42.493944 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:42.493907 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:42.497897 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:42.497876 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-74c596c9bf-ckdfj" Apr 17 14:10:42.506575 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:10:42.506554 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-n4248_f7bc9c86-d3a7-43d7-9862-6170cb691894/serve-healthcheck-canary/0.log" Apr 17 14:11:04.310439 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:04.310385 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:04.330470 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:04.330442 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:05.001522 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:05.001494 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:18.039240 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:18.039205 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 17 14:11:18.039708 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:18.039618 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="alertmanager" containerID="cri-o://66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad" gracePeriod=120 Apr 17 14:11:18.039768 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:18.039694 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy-metric" containerID="cri-o://b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567" gracePeriod=120 Apr 17 14:11:18.039768 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:18.039720 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy-web" containerID="cri-o://1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9" gracePeriod=120 Apr 17 14:11:18.039899 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:18.039747 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="prom-label-proxy" containerID="cri-o://cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6" gracePeriod=120 Apr 17 14:11:18.039899 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:18.039747 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy" 
containerID="cri-o://45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c" gracePeriod=120 Apr 17 14:11:18.039899 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:18.039800 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="config-reloader" containerID="cri-o://16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95" gracePeriod=120 Apr 17 14:11:19.032663 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.032626 2568 generic.go:358] "Generic (PLEG): container finished" podID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerID="cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6" exitCode=0 Apr 17 14:11:19.032663 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.032656 2568 generic.go:358] "Generic (PLEG): container finished" podID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerID="45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c" exitCode=0 Apr 17 14:11:19.032663 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.032665 2568 generic.go:358] "Generic (PLEG): container finished" podID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerID="16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95" exitCode=0 Apr 17 14:11:19.032663 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.032673 2568 generic.go:358] "Generic (PLEG): container finished" podID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerID="66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad" exitCode=0 Apr 17 14:11:19.032972 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.032670 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerDied","Data":"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6"} Apr 17 14:11:19.032972 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.032702 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerDied","Data":"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c"} Apr 17 14:11:19.032972 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.032715 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerDied","Data":"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95"} Apr 17 14:11:19.032972 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.032727 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerDied","Data":"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad"} Apr 17 14:11:19.285495 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.285427 2568 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:11:19.356436 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356406 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-out\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356447 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-tls-assets\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356466 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-volume\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356504 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356537 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-web-config\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356569 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-metrics-client-ca\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356648 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356692 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-main-tls\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356740 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-cluster-tls-config\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356904 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:11:19.356777 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-main-db\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356842 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn2ws\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-kube-api-access-pn2ws\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.356904 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356891 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-metric\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.357209 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.356956 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-web\") pod \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\" (UID: \"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03\") " Apr 17 14:11:19.357209 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.357111 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:11:19.357209 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.357142 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:11:19.357361 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.357251 2568 reconciler_common.go:299] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.357361 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.357269 2568 reconciler_common.go:299] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-metrics-client-ca\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.357462 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.357400 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 14:11:19.359964 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.359922 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:19.360170 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.360008 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-volume" (OuterVolumeSpecName: "config-volume") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:19.360621 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.360566 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-out" (OuterVolumeSpecName: "config-out") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 14:11:19.361132 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.361080 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:19.361622 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.361592 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). 
InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:19.361731 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.361707 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:19.361786 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.361727 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:11:19.362177 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.362158 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-kube-api-access-pn2ws" (OuterVolumeSpecName: "kube-api-access-pn2ws") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "kube-api-access-pn2ws". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:11:19.365220 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.365190 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-cluster-tls-config" (OuterVolumeSpecName: "cluster-tls-config") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "cluster-tls-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:19.373287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.373260 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-web-config" (OuterVolumeSpecName: "web-config") pod "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" (UID: "9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:19.457966 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.457929 2568 reconciler_common.go:299] "Volume detached for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-cluster-tls-config\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.457966 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.457959 2568 reconciler_common.go:299] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-alertmanager-main-db\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.457966 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.457969 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pn2ws\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-kube-api-access-pn2ws\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.458197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.457980 2568 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.458197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.457990 2568 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy-web\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.458197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.458001 2568 reconciler_common.go:299] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-out\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.458197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.458010 2568 reconciler_common.go:299] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-tls-assets\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.458197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.458018 2568 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-config-volume\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.458197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.458026 2568 reconciler_common.go:299] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-web-config\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.458197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.458034 2568 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:19.458197 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:19.458044 2568 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03-secret-alertmanager-main-tls\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:20.039217 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.039182 2568 generic.go:358] "Generic (PLEG): container finished" podID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerID="b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567" exitCode=0 Apr 17 14:11:20.039217 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.039212 2568 generic.go:358] "Generic (PLEG): container finished" podID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerID="1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9" exitCode=0 Apr 17 14:11:20.039434 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.039263 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerDied","Data":"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567"} Apr 17 14:11:20.039434 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.039304 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerDied","Data":"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9"} Apr 17 14:11:20.039434 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.039316 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03","Type":"ContainerDied","Data":"63140b79589a4433552ae0db3103f518fc4e03bdab9b1ac67b0c8df8acf75143"} Apr 17 14:11:20.039434 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.039322 2568 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 17 14:11:20.039434 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.039331 2568 scope.go:117] "RemoveContainer" containerID="cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6" Apr 17 14:11:20.047092 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.047077 2568 scope.go:117] "RemoveContainer" containerID="b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567" Apr 17 14:11:20.056075 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.056059 2568 scope.go:117] "RemoveContainer" containerID="45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c" Apr 17 14:11:20.063737 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.063710 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 17 14:11:20.063737 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.063728 2568 scope.go:117] "RemoveContainer" containerID="1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9" Apr 17 14:11:20.066991 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.066968 2568 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 17 14:11:20.071587 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.071571 2568 scope.go:117] "RemoveContainer" containerID="16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95" Apr 17 14:11:20.078308 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.078290 2568 scope.go:117] "RemoveContainer" containerID="66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad" Apr 17 14:11:20.085036 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.085019 2568 scope.go:117] "RemoveContainer" containerID="166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0" Apr 17 14:11:20.091935 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.091917 2568 scope.go:117] "RemoveContainer" containerID="cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6" Apr 17 14:11:20.092216 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:20.092173 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6\": container with ID starting with cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6 not found: ID does not exist" containerID="cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6" Apr 17 14:11:20.092271 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.092204 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6"} err="failed to get container status \"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6\": rpc error: code = NotFound desc = could not find container \"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6\": container with ID starting with cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6 not found: ID does not exist" Apr 17 14:11:20.092271 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.092231 2568 scope.go:117] "RemoveContainer" containerID="b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567" Apr 17 14:11:20.092498 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:20.092477 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567\": container with ID starting with b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567 not found: ID does not exist" containerID="b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567" Apr 17 14:11:20.092537 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.092508 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567"} err="failed to get container status \"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567\": rpc error: code = NotFound desc = could not find container \"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567\": container with ID starting with b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567 not found: ID does not exist" Apr 17 14:11:20.092537 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.092531 2568 scope.go:117] "RemoveContainer" containerID="45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c" Apr 17 14:11:20.092772 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:20.092751 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c\": container with ID starting with 45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c not found: ID does not exist" containerID="45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c" Apr 17 14:11:20.092874 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.092776 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c"} err="failed to get container status \"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c\": rpc error: code = NotFound desc = could not find container \"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c\": container with ID starting with 45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c not found: ID does not exist" Apr 17 14:11:20.092874 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.092798 2568 scope.go:117] "RemoveContainer" containerID="1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9" Apr 17 14:11:20.093109 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:20.093090 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9\": container with ID starting with 1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9 not found: ID does not exist" containerID="1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9" Apr 17 14:11:20.093206 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.093113 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9"} err="failed to get container status \"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9\": rpc error: code = NotFound desc = could not find container \"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9\": container with ID starting with 1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9 not found: ID does not exist" Apr 17 14:11:20.093206 ip-10-0-140-104 kubenswrapper[2568]: I0417 
14:11:20.093127 2568 scope.go:117] "RemoveContainer" containerID="16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95" Apr 17 14:11:20.093353 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:20.093336 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95\": container with ID starting with 16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95 not found: ID does not exist" containerID="16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95" Apr 17 14:11:20.093394 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.093357 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95"} err="failed to get container status \"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95\": rpc error: code = NotFound desc = could not find container \"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95\": container with ID starting with 16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95 not found: ID does not exist" Apr 17 14:11:20.093432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.093380 2568 scope.go:117] "RemoveContainer" containerID="66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad" Apr 17 14:11:20.093612 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:20.093597 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad\": container with ID starting with 66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad not found: ID does not exist" containerID="66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad" Apr 17 14:11:20.093652 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.093615 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad"} err="failed to get container status \"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad\": rpc error: code = NotFound desc = could not find container \"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad\": container with ID starting with 66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad not found: ID does not exist" Apr 17 14:11:20.093652 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.093632 2568 scope.go:117] "RemoveContainer" containerID="166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0" Apr 17 14:11:20.093826 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:20.093810 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0\": container with ID starting with 166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0 not found: ID does not exist" containerID="166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0" Apr 17 14:11:20.093889 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.093831 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0"} err="failed to get container status 
\"166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0\": rpc error: code = NotFound desc = could not find container \"166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0\": container with ID starting with 166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0 not found: ID does not exist" Apr 17 14:11:20.093889 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.093846 2568 scope.go:117] "RemoveContainer" containerID="cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6" Apr 17 14:11:20.094118 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094102 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6"} err="failed to get container status \"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6\": rpc error: code = NotFound desc = could not find container \"cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6\": container with ID starting with cb668cb872d1e7a56195fcc38736fa580cce412fa6b0b4302c4a63ecbd99caa6 not found: ID does not exist" Apr 17 14:11:20.094170 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094117 2568 scope.go:117] "RemoveContainer" containerID="b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567" Apr 17 14:11:20.094327 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094310 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567"} err="failed to get container status \"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567\": rpc error: code = NotFound desc = could not find container \"b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567\": container with ID starting with b1802803934e1dc7ee96af34dd2b3ca7948da362928c4d11e0761db38a5b0567 not found: ID does not exist" Apr 17 14:11:20.094393 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094329 2568 scope.go:117] "RemoveContainer" containerID="45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c" Apr 17 14:11:20.094545 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094524 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c"} err="failed to get container status \"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c\": rpc error: code = NotFound desc = could not find container \"45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c\": container with ID starting with 45202397e0a58f429b699cf04b80c08e0ae79b5e9906e9230dd2bd1d6189502c not found: ID does not exist" Apr 17 14:11:20.094595 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094545 2568 scope.go:117] "RemoveContainer" containerID="1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9" Apr 17 14:11:20.094716 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094701 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9"} err="failed to get container status \"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9\": rpc error: code = NotFound desc = could not find container \"1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9\": container with ID starting with 1664d2cbeec44fde6c75a111fa65b2014f307b04d0c239e62ebe7de406a043c9 not found: ID does 
not exist" Apr 17 14:11:20.094755 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094717 2568 scope.go:117] "RemoveContainer" containerID="16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95" Apr 17 14:11:20.094910 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094896 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95"} err="failed to get container status \"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95\": rpc error: code = NotFound desc = could not find container \"16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95\": container with ID starting with 16239c5ce4f92bd30653f46ebcaf79bd878b36339bb861529c88950c4c492b95 not found: ID does not exist" Apr 17 14:11:20.094956 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.094910 2568 scope.go:117] "RemoveContainer" containerID="66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad" Apr 17 14:11:20.095129 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.095112 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad"} err="failed to get container status \"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad\": rpc error: code = NotFound desc = could not find container \"66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad\": container with ID starting with 66795af62eb0978f637d7ffedbfddd72d673fe29746bc9c0009e771e3b8ed8ad not found: ID does not exist" Apr 17 14:11:20.095173 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.095130 2568 scope.go:117] "RemoveContainer" containerID="166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0" Apr 17 14:11:20.095314 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:20.095296 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0"} err="failed to get container status \"166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0\": rpc error: code = NotFound desc = could not find container \"166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0\": container with ID starting with 166bd4961e7c244d65fe7706b95b80f1674842a8cd842b0978d8ca7d769343b0 not found: ID does not exist" Apr 17 14:11:21.263402 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:21.263371 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" path="/var/lib/kubelet/pods/9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03/volumes" Apr 17 14:11:22.266173 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.266133 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:11:22.266752 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.266719 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="prometheus" containerID="cri-o://d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" gracePeriod=600 Apr 17 14:11:22.266851 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.266738 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy" 
containerID="cri-o://5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" gracePeriod=600 Apr 17 14:11:22.266851 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.266788 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy-web" containerID="cri-o://033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" gracePeriod=600 Apr 17 14:11:22.266851 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.266840 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy-thanos" containerID="cri-o://9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" gracePeriod=600 Apr 17 14:11:22.267058 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.266882 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="config-reloader" containerID="cri-o://7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" gracePeriod=600 Apr 17 14:11:22.267058 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.267025 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="thanos-sidecar" containerID="cri-o://747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" gracePeriod=600 Apr 17 14:11:22.518932 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.518848 2568 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:22.587341 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587306 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-rulefiles-0\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587494 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587349 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-db\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587494 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587385 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-tls-assets\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587614 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587551 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-trusted-ca-bundle\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587614 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587584 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-web-config\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587721 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587625 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-grpc-tls\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587721 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587667 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587721 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587706 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config-out\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587884 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587759 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-serving-certs-ca-bundle\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 
14:11:22.587884 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587791 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-kube-rbac-proxy\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587884 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587819 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-tls\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.587884 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587850 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmc9c\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-kube-api-access-qmc9c\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.588078 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587892 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-metrics-client-ca\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.588078 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587921 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-thanos-prometheus-http-client-file\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.588078 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587955 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:11:22.588078 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.587971 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.588078 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.588007 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-kubelet-serving-ca-bundle\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.588078 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.588059 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-metrics-client-certs\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.588341 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.588092 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config\") pod \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\" (UID: \"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf\") " Apr 17 14:11:22.588400 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.588340 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:11:22.588400 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.588373 2568 reconciler_common.go:299] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-trusted-ca-bundle\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.588497 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.588478 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 14:11:22.590070 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.589689 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:11:22.590070 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.589794 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:11:22.590470 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.590432 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 14:11:22.592171 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.592137 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.592291 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.592226 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:11:22.592670 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.592599 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.592932 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.592904 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config" (OuterVolumeSpecName: "config") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.593071 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.592935 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "secret-kube-rbac-proxy". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.593071 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.593051 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-kube-api-access-qmc9c" (OuterVolumeSpecName: "kube-api-access-qmc9c") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "kube-api-access-qmc9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:11:22.593237 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.593200 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.593274 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.593247 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.593456 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.593423 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.593660 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.593641 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config-out" (OuterVolumeSpecName: "config-out") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 14:11:22.594023 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.593989 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.604216 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.604174 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-web-config" (OuterVolumeSpecName: "web-config") pod "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" (UID: "af0cb3b3-39c8-4938-87dd-d3e263fbb0cf"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 14:11:22.689287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689250 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qmc9c\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-kube-api-access-qmc9c\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689281 2568 reconciler_common.go:299] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-metrics-client-ca\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689294 2568 reconciler_common.go:299] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-thanos-prometheus-http-client-file\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689304 2568 reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689316 2568 reconciler_common.go:299] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-kubelet-serving-ca-bundle\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689325 2568 reconciler_common.go:299] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-metrics-client-certs\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689334 2568 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689343 2568 reconciler_common.go:299] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-rulefiles-0\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689354 2568 reconciler_common.go:299] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-prometheus-k8s-db\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689363 2568 reconciler_common.go:299] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-tls-assets\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689371 2568 reconciler_common.go:299] "Volume detached for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-web-config\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689378 2568 reconciler_common.go:299] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-grpc-tls\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689386 2568 reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689394 2568 reconciler_common.go:299] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-config-out\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689403 2568 reconciler_common.go:299] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-configmap-serving-certs-ca-bundle\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689411 2568 reconciler_common.go:299] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-kube-rbac-proxy\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:22.689516 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:22.689420 2568 reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf-secret-prometheus-k8s-tls\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:11:23.053033 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053001 2568 generic.go:358] "Generic (PLEG): container finished" podID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" exitCode=0 Apr 17 14:11:23.053033 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053027 2568 generic.go:358] "Generic (PLEG): container finished" podID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" exitCode=0 Apr 17 14:11:23.053033 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053034 2568 generic.go:358] "Generic (PLEG): container finished" podID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" exitCode=0 Apr 17 14:11:23.053033 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053039 2568 generic.go:358] "Generic (PLEG): container finished" podID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" exitCode=0 Apr 17 14:11:23.053033 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053045 2568 generic.go:358] "Generic (PLEG): container finished" podID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" exitCode=0 Apr 17 14:11:23.053033 ip-10-0-140-104 kubenswrapper[2568]: 
I0417 14:11:23.053050 2568 generic.go:358] "Generic (PLEG): container finished" podID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" exitCode=0 Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053084 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerDied","Data":"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396"} Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053100 2568 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053125 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerDied","Data":"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e"} Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053136 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerDied","Data":"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60"} Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053147 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerDied","Data":"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564"} Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053158 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerDied","Data":"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591"} Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053167 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerDied","Data":"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217"} Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053176 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"af0cb3b3-39c8-4938-87dd-d3e263fbb0cf","Type":"ContainerDied","Data":"7dc67697e27af8991296a18d27fe1384d3c64ed61358a18b48d955cb2f05d6cf"} Apr 17 14:11:23.053401 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.053190 2568 scope.go:117] "RemoveContainer" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" Apr 17 14:11:23.060839 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.060689 2568 scope.go:117] "RemoveContainer" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" Apr 17 14:11:23.068075 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.068058 2568 scope.go:117] "RemoveContainer" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" Apr 17 14:11:23.075061 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.075042 2568 scope.go:117] "RemoveContainer" containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" Apr 17 14:11:23.076455 
ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.076436 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:11:23.084059 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.084029 2568 scope.go:117] "RemoveContainer" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" Apr 17 14:11:23.084547 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.084520 2568 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:11:23.091162 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.091119 2568 scope.go:117] "RemoveContainer" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" Apr 17 14:11:23.098806 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.098781 2568 scope.go:117] "RemoveContainer" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" Apr 17 14:11:23.103577 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103553 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:11:23.103906 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103892 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy" Apr 17 14:11:23.103906 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103907 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103915 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="prometheus" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103920 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="prometheus" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103929 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="config-reloader" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103935 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="config-reloader" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103943 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy-web" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103948 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy-web" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103955 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="prom-label-proxy" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103960 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="prom-label-proxy" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103966 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="init-config-reloader" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103971 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="init-config-reloader" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103977 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="thanos-sidecar" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103982 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="thanos-sidecar" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103989 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="config-reloader" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.103994 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="config-reloader" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104000 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy-thanos" Apr 17 14:11:23.104005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104005 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy-thanos" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104013 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104018 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104041 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="alertmanager" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104046 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="alertmanager" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104053 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy-metric" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104058 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy-metric" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104064 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" containerName="console" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104069 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" containerName="console" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104098 2568 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="init-config-reloader" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104104 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="init-config-reloader" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104112 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy-web" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104117 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy-web" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104162 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c48a882-c4d6-401e-8b76-fdfdcfd63b2a" containerName="console" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104169 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy-web" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104174 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="alertmanager" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104180 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="thanos-sidecar" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104187 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104193 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104200 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="config-reloader" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104206 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="prometheus" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104212 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy-thanos" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104218 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="prom-label-proxy" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104223 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="config-reloader" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.104228 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bbb9e8f-45ed-4e69-b0ef-fe4f9829df03" containerName="kube-rbac-proxy-metric" Apr 17 14:11:23.104468 ip-10-0-140-104 kubenswrapper[2568]: I0417 
14:11:23.104235 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" containerName="kube-rbac-proxy-web" Apr 17 14:11:23.108673 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.107083 2568 scope.go:117] "RemoveContainer" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" Apr 17 14:11:23.109274 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:23.109248 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": container with ID starting with 9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396 not found: ID does not exist" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" Apr 17 14:11:23.109365 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.109283 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396"} err="failed to get container status \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": rpc error: code = NotFound desc = could not find container \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": container with ID starting with 9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396 not found: ID does not exist" Apr 17 14:11:23.109365 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.109301 2568 scope.go:117] "RemoveContainer" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" Apr 17 14:11:23.109589 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:23.109572 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": container with ID starting with 5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e not found: ID does not exist" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" Apr 17 14:11:23.109636 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.109592 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e"} err="failed to get container status \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": rpc error: code = NotFound desc = could not find container \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": container with ID starting with 5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e not found: ID does not exist" Apr 17 14:11:23.109636 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.109607 2568 scope.go:117] "RemoveContainer" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" Apr 17 14:11:23.109852 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:23.109835 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": container with ID starting with 033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60 not found: ID does not exist" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" Apr 17 14:11:23.109938 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.109885 2568 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60"} err="failed to get container status \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": rpc error: code = NotFound desc = could not find container \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": container with ID starting with 033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60 not found: ID does not exist" Apr 17 14:11:23.109938 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.109902 2568 scope.go:117] "RemoveContainer" containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" Apr 17 14:11:23.110191 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:23.110176 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": container with ID starting with 747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564 not found: ID does not exist" containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" Apr 17 14:11:23.110237 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.110194 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564"} err="failed to get container status \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": rpc error: code = NotFound desc = could not find container \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": container with ID starting with 747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564 not found: ID does not exist" Apr 17 14:11:23.110237 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.110208 2568 scope.go:117] "RemoveContainer" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" Apr 17 14:11:23.110424 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:23.110405 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": container with ID starting with 7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591 not found: ID does not exist" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" Apr 17 14:11:23.110484 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.110434 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591"} err="failed to get container status \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": rpc error: code = NotFound desc = could not find container \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": container with ID starting with 7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591 not found: ID does not exist" Apr 17 14:11:23.110484 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.110457 2568 scope.go:117] "RemoveContainer" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" Apr 17 14:11:23.110666 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:23.110646 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": container with ID starting with d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217 not found: ID does not exist" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" Apr 17 14:11:23.110704 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.110670 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217"} err="failed to get container status \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": rpc error: code = NotFound desc = could not find container \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": container with ID starting with d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217 not found: ID does not exist" Apr 17 14:11:23.110704 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.110685 2568 scope.go:117] "RemoveContainer" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" Apr 17 14:11:23.110905 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:11:23.110888 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": container with ID starting with af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757 not found: ID does not exist" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" Apr 17 14:11:23.110956 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.110908 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757"} err="failed to get container status \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": rpc error: code = NotFound desc = could not find container \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": container with ID starting with af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757 not found: ID does not exist" Apr 17 14:11:23.110956 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.110920 2568 scope.go:117] "RemoveContainer" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" Apr 17 14:11:23.111123 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111103 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396"} err="failed to get container status \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": rpc error: code = NotFound desc = could not find container \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": container with ID starting with 9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396 not found: ID does not exist" Apr 17 14:11:23.111186 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111125 2568 scope.go:117] "RemoveContainer" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" Apr 17 14:11:23.111329 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111310 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e"} err="failed to get container status \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": rpc 
error: code = NotFound desc = could not find container \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": container with ID starting with 5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e not found: ID does not exist" Apr 17 14:11:23.111394 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111330 2568 scope.go:117] "RemoveContainer" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" Apr 17 14:11:23.111571 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111545 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60"} err="failed to get container status \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": rpc error: code = NotFound desc = could not find container \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": container with ID starting with 033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60 not found: ID does not exist" Apr 17 14:11:23.111612 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111571 2568 scope.go:117] "RemoveContainer" containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" Apr 17 14:11:23.111705 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111691 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.111773 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111757 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564"} err="failed to get container status \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": rpc error: code = NotFound desc = could not find container \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": container with ID starting with 747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564 not found: ID does not exist" Apr 17 14:11:23.111826 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.111774 2568 scope.go:117] "RemoveContainer" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" Apr 17 14:11:23.112063 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.112019 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591"} err="failed to get container status \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": rpc error: code = NotFound desc = could not find container \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": container with ID starting with 7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591 not found: ID does not exist" Apr 17 14:11:23.112224 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.112112 2568 scope.go:117] "RemoveContainer" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" Apr 17 14:11:23.112377 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.112352 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217"} err="failed to get container status \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": rpc error: code = NotFound desc = could not find container 
\"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": container with ID starting with d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217 not found: ID does not exist" Apr 17 14:11:23.112444 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.112378 2568 scope.go:117] "RemoveContainer" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" Apr 17 14:11:23.112584 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.112566 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757"} err="failed to get container status \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": rpc error: code = NotFound desc = could not find container \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": container with ID starting with af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757 not found: ID does not exist" Apr 17 14:11:23.112623 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.112585 2568 scope.go:117] "RemoveContainer" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" Apr 17 14:11:23.112841 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.112807 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396"} err="failed to get container status \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": rpc error: code = NotFound desc = could not find container \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": container with ID starting with 9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396 not found: ID does not exist" Apr 17 14:11:23.112841 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.112840 2568 scope.go:117] "RemoveContainer" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" Apr 17 14:11:23.113115 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113089 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e"} err="failed to get container status \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": rpc error: code = NotFound desc = could not find container \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": container with ID starting with 5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e not found: ID does not exist" Apr 17 14:11:23.113115 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113114 2568 scope.go:117] "RemoveContainer" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" Apr 17 14:11:23.113325 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113308 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60"} err="failed to get container status \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": rpc error: code = NotFound desc = could not find container \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": container with ID starting with 033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60 not found: ID does not exist" Apr 17 14:11:23.113376 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113325 2568 scope.go:117] 
"RemoveContainer" containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" Apr 17 14:11:23.113545 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113525 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564"} err="failed to get container status \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": rpc error: code = NotFound desc = could not find container \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": container with ID starting with 747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564 not found: ID does not exist" Apr 17 14:11:23.113584 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113545 2568 scope.go:117] "RemoveContainer" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" Apr 17 14:11:23.113729 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113714 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591"} err="failed to get container status \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": rpc error: code = NotFound desc = could not find container \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": container with ID starting with 7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591 not found: ID does not exist" Apr 17 14:11:23.113772 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113730 2568 scope.go:117] "RemoveContainer" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" Apr 17 14:11:23.113972 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113951 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217"} err="failed to get container status \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": rpc error: code = NotFound desc = could not find container \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": container with ID starting with d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217 not found: ID does not exist" Apr 17 14:11:23.114036 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.113972 2568 scope.go:117] "RemoveContainer" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" Apr 17 14:11:23.114200 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114182 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757"} err="failed to get container status \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": rpc error: code = NotFound desc = could not find container \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": container with ID starting with af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757 not found: ID does not exist" Apr 17 14:11:23.114252 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114200 2568 scope.go:117] "RemoveContainer" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" Apr 17 14:11:23.114479 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114408 2568 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396"} err="failed to get container status \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": rpc error: code = NotFound desc = could not find container \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": container with ID starting with 9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396 not found: ID does not exist" Apr 17 14:11:23.114479 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114429 2568 scope.go:117] "RemoveContainer" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" Apr 17 14:11:23.114479 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114451 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-dockercfg-xjch6\"" Apr 17 14:11:23.114676 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114508 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-client-certs\"" Apr 17 14:11:23.114676 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114524 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-rbac-proxy\"" Apr 17 14:11:23.114676 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114540 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls-assets-0\"" Apr 17 14:11:23.114676 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114565 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-web-config\"" Apr 17 14:11:23.114676 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114526 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-prometheus-http-client-file\"" Apr 17 14:11:23.114676 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114656 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e"} err="failed to get container status \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": rpc error: code = NotFound desc = could not find container \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": container with ID starting with 5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e not found: ID does not exist" Apr 17 14:11:23.114934 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114681 2568 scope.go:117] "RemoveContainer" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" Apr 17 14:11:23.114934 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114904 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60"} err="failed to get container status \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": rpc error: code = NotFound desc = could not find container \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": container with ID starting with 033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60 not found: ID does not exist" Apr 17 14:11:23.114934 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.114926 2568 scope.go:117] "RemoveContainer" 
containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" Apr 17 14:11:23.115075 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115066 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls\"" Apr 17 14:11:23.115151 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115134 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-kube-rbac-proxy-web\"" Apr 17 14:11:23.115216 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115145 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564"} err="failed to get container status \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": rpc error: code = NotFound desc = could not find container \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": container with ID starting with 747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564 not found: ID does not exist" Apr 17 14:11:23.115216 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115159 2568 scope.go:117] "RemoveContainer" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" Apr 17 14:11:23.115318 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115265 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s\"" Apr 17 14:11:23.115429 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115406 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"serving-certs-ca-bundle\"" Apr 17 14:11:23.115513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115460 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591"} err="failed to get container status \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": rpc error: code = NotFound desc = could not find container \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": container with ID starting with 7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591 not found: ID does not exist" Apr 17 14:11:23.115513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115476 2568 scope.go:117] "RemoveContainer" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" Apr 17 14:11:23.115629 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115584 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-sidecar-tls\"" Apr 17 14:11:23.115740 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115659 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-grpc-tls-14bn2bg7nbca6\"" Apr 17 14:11:23.115789 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115728 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217"} err="failed to get container status \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": rpc error: code = NotFound desc = could not find container \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": container with ID starting with 
d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217 not found: ID does not exist" Apr 17 14:11:23.115789 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.115751 2568 scope.go:117] "RemoveContainer" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" Apr 17 14:11:23.116034 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.116003 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757"} err="failed to get container status \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": rpc error: code = NotFound desc = could not find container \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": container with ID starting with af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757 not found: ID does not exist" Apr 17 14:11:23.116034 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.116022 2568 scope.go:117] "RemoveContainer" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" Apr 17 14:11:23.116247 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.116222 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396"} err="failed to get container status \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": rpc error: code = NotFound desc = could not find container \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": container with ID starting with 9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396 not found: ID does not exist" Apr 17 14:11:23.116300 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.116249 2568 scope.go:117] "RemoveContainer" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" Apr 17 14:11:23.116531 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.116502 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e"} err="failed to get container status \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": rpc error: code = NotFound desc = could not find container \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": container with ID starting with 5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e not found: ID does not exist" Apr 17 14:11:23.116610 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.116532 2568 scope.go:117] "RemoveContainer" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" Apr 17 14:11:23.116758 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.116737 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60"} err="failed to get container status \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": rpc error: code = NotFound desc = could not find container \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": container with ID starting with 033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60 not found: ID does not exist" Apr 17 14:11:23.116816 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.116761 2568 scope.go:117] "RemoveContainer" containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" Apr 17 
14:11:23.117063 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.117037 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564"} err="failed to get container status \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": rpc error: code = NotFound desc = could not find container \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": container with ID starting with 747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564 not found: ID does not exist" Apr 17 14:11:23.117163 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.117066 2568 scope.go:117] "RemoveContainer" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" Apr 17 14:11:23.117365 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.117337 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591"} err="failed to get container status \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": rpc error: code = NotFound desc = could not find container \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": container with ID starting with 7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591 not found: ID does not exist" Apr 17 14:11:23.117487 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.117366 2568 scope.go:117] "RemoveContainer" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" Apr 17 14:11:23.117690 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.117642 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217"} err="failed to get container status \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": rpc error: code = NotFound desc = could not find container \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": container with ID starting with d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217 not found: ID does not exist" Apr 17 14:11:23.117802 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.117694 2568 scope.go:117] "RemoveContainer" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" Apr 17 14:11:23.117986 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.117964 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-rulefiles-0\"" Apr 17 14:11:23.118123 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.117997 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757"} err="failed to get container status \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": rpc error: code = NotFound desc = could not find container \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": container with ID starting with af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757 not found: ID does not exist" Apr 17 14:11:23.118123 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.118093 2568 scope.go:117] "RemoveContainer" containerID="9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396" Apr 17 14:11:23.118432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.118333 2568 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396"} err="failed to get container status \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": rpc error: code = NotFound desc = could not find container \"9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396\": container with ID starting with 9420620803c3fbf9d7391545cad1089ddedb646283c35fa9c560618ad0e05396 not found: ID does not exist" Apr 17 14:11:23.118432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.118359 2568 scope.go:117] "RemoveContainer" containerID="5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e" Apr 17 14:11:23.119347 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.118724 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e"} err="failed to get container status \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": rpc error: code = NotFound desc = could not find container \"5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e\": container with ID starting with 5afaf7293c661eb88d6891bc5e1cc32e5dce18bc61b5f70c74e4024974e4466e not found: ID does not exist" Apr 17 14:11:23.119347 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.118763 2568 scope.go:117] "RemoveContainer" containerID="033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60" Apr 17 14:11:23.119347 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.119016 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60"} err="failed to get container status \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": rpc error: code = NotFound desc = could not find container \"033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60\": container with ID starting with 033953c1f65cd6a502640af7094fe1fe153efdac9dbfa27b100d24b924c20d60 not found: ID does not exist" Apr 17 14:11:23.119347 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.119036 2568 scope.go:117] "RemoveContainer" containerID="747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564" Apr 17 14:11:23.120186 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.120054 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564"} err="failed to get container status \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": rpc error: code = NotFound desc = could not find container \"747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564\": container with ID starting with 747e4a659c7fc4dd21dfaabc7d051928fd78be22ba1c250c75fb02553e437564 not found: ID does not exist" Apr 17 14:11:23.120186 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.120079 2568 scope.go:117] "RemoveContainer" containerID="7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591" Apr 17 14:11:23.121153 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.121129 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591"} err="failed to get container status \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": rpc error: code = NotFound desc = could not 
find container \"7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591\": container with ID starting with 7c48547c3d36bd6340d567be1da02e250f606b5049e75fa9f0c0ec16a391e591 not found: ID does not exist" Apr 17 14:11:23.121236 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.121154 2568 scope.go:117] "RemoveContainer" containerID="d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217" Apr 17 14:11:23.121487 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.121462 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217"} err="failed to get container status \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": rpc error: code = NotFound desc = could not find container \"d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217\": container with ID starting with d7e2dae6167406f87b0ca67b185da761bf30014e0e429b03fe230cd615c6b217 not found: ID does not exist" Apr 17 14:11:23.121568 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.121489 2568 scope.go:117] "RemoveContainer" containerID="af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757" Apr 17 14:11:23.121778 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.121746 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757"} err="failed to get container status \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": rpc error: code = NotFound desc = could not find container \"af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757\": container with ID starting with af824c226a1007ad1cddb051ffeffc02fe98b8026cd252871819278630e72757 not found: ID does not exist" Apr 17 14:11:23.122048 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.122023 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-trusted-ca-bundle\"" Apr 17 14:11:23.122481 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.122462 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:11:23.194886 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.194831 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195054 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.194908 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/445cda6f-16ef-42d7-9ac0-5f969136057a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195054 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.194954 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195054 
ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.194982 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195054 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195006 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195054 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195032 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/445cda6f-16ef-42d7-9ac0-5f969136057a-config-out\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195264 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195087 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195264 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195119 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195264 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195163 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195264 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195194 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-config\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195263 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195290 2568 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195314 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqkt\" (UniqueName: \"kubernetes.io/projected/445cda6f-16ef-42d7-9ac0-5f969136057a-kube-api-access-zkqkt\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195338 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195368 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195457 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195513 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195497 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-web-config\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.195743 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.195524 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.263556 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.263521 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af0cb3b3-39c8-4938-87dd-d3e263fbb0cf" path="/var/lib/kubelet/pods/af0cb3b3-39c8-4938-87dd-d3e263fbb0cf/volumes" Apr 17 14:11:23.296333 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.296303 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-thanos-sidecar-tls\") pod 
\"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.296633 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.296343 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-web-config\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.296633 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.296363 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.296633 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.296618 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.296751 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.296686 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/445cda6f-16ef-42d7-9ac0-5f969136057a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.296751 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.296715 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.296751 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.296745 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297039 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297017 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297124 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297069 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/445cda6f-16ef-42d7-9ac0-5f969136057a-config-out\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297124 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297114 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" 
(UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297226 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297143 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297226 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297177 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297226 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297206 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-config\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297367 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297253 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297367 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297282 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297367 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297307 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqkt\" (UniqueName: \"kubernetes.io/projected/445cda6f-16ef-42d7-9ac0-5f969136057a-kube-api-access-zkqkt\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297367 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297337 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.297560 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297372 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 
17 14:11:23.297560 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.297375 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.298955 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.298722 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.300303 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.300254 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301111 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.300416 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/445cda6f-16ef-42d7-9ac0-5f969136057a-config-out\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301111 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.300553 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-web-config\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301111 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.300559 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301111 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.300730 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/445cda6f-16ef-42d7-9ac0-5f969136057a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301361 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.301271 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301361 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.301316 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: 
\"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301466 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.301422 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301777 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.301735 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301777 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.301752 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301925 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.301795 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.301925 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.301832 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.302227 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.302207 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/445cda6f-16ef-42d7-9ac0-5f969136057a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.302287 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.302228 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.302901 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.302887 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/445cda6f-16ef-42d7-9ac0-5f969136057a-config\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.307103 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.307063 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqkt\" (UniqueName: 
\"kubernetes.io/projected/445cda6f-16ef-42d7-9ac0-5f969136057a-kube-api-access-zkqkt\") pod \"prometheus-k8s-0\" (UID: \"445cda6f-16ef-42d7-9ac0-5f969136057a\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.424641 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.424601 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:11:23.555142 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:23.555114 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 17 14:11:23.557250 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:11:23.557188 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod445cda6f_16ef_42d7_9ac0_5f969136057a.slice/crio-4eb82e27f665509d9ba63843c33aa37130be9c0c068087676de3ef828bf28663 WatchSource:0}: Error finding container 4eb82e27f665509d9ba63843c33aa37130be9c0c068087676de3ef828bf28663: Status 404 returned error can't find the container with id 4eb82e27f665509d9ba63843c33aa37130be9c0c068087676de3ef828bf28663 Apr 17 14:11:24.057771 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:24.057736 2568 generic.go:358] "Generic (PLEG): container finished" podID="445cda6f-16ef-42d7-9ac0-5f969136057a" containerID="ad589b9f7e6ae38ef36476f656f6e35ee898ead83a688a5123b4e05778d1d121" exitCode=0 Apr 17 14:11:24.057936 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:24.057833 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"445cda6f-16ef-42d7-9ac0-5f969136057a","Type":"ContainerDied","Data":"ad589b9f7e6ae38ef36476f656f6e35ee898ead83a688a5123b4e05778d1d121"} Apr 17 14:11:24.057936 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:24.057891 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"445cda6f-16ef-42d7-9ac0-5f969136057a","Type":"ContainerStarted","Data":"4eb82e27f665509d9ba63843c33aa37130be9c0c068087676de3ef828bf28663"} Apr 17 14:11:25.064645 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:25.064606 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"445cda6f-16ef-42d7-9ac0-5f969136057a","Type":"ContainerStarted","Data":"fc93a005bdcece9aec1ad9cb85dfa6875d1f2f547e1fe7f1098033df836f0ca2"} Apr 17 14:11:25.064645 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:25.064646 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"445cda6f-16ef-42d7-9ac0-5f969136057a","Type":"ContainerStarted","Data":"e7489422e31cb2760bf5f2b6e6114d1029e33fe9f6091637896cb4a1af5f2f5f"} Apr 17 14:11:25.065093 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:25.064656 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"445cda6f-16ef-42d7-9ac0-5f969136057a","Type":"ContainerStarted","Data":"a0ce216f25faa05f077508e055dfec4fa09941a66ee603acdeb2391badb49d01"} Apr 17 14:11:25.065093 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:25.064664 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"445cda6f-16ef-42d7-9ac0-5f969136057a","Type":"ContainerStarted","Data":"368989e432a2336d84af01f6238090f64d82621267603c90b633cfbe1a0047cd"} Apr 17 14:11:25.065093 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:25.064672 2568 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"445cda6f-16ef-42d7-9ac0-5f969136057a","Type":"ContainerStarted","Data":"7821f560bf4d37ffa44831204468618240d07e03805723343241452c31b0c382"} Apr 17 14:11:25.065093 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:25.064680 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"445cda6f-16ef-42d7-9ac0-5f969136057a","Type":"ContainerStarted","Data":"2687d0564c10abbede20fd05ec728ca425bc323a8e153d99381b2c29ef4c7268"} Apr 17 14:11:25.093438 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:25.093382 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=2.093366445 podStartE2EDuration="2.093366445s" podCreationTimestamp="2026-04-17 14:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 14:11:25.092030703 +0000 UTC m=+234.429821814" watchObservedRunningTime="2026-04-17 14:11:25.093366445 +0000 UTC m=+234.431157558" Apr 17 14:11:28.425491 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:11:28.425451 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:12:23.425247 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:12:23.425214 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:12:23.440608 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:12:23.440582 2568 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:12:24.255206 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:12:24.255177 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 17 14:12:31.099297 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:12:31.099270 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:12:31.102940 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:12:31.102912 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:12:31.107936 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:12:31.107914 2568 kubelet.go:1628] "Image garbage collection succeeded" Apr 17 14:13:12.842000 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:12.841923 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-2r7hr"] Apr 17 14:13:12.844761 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:12.844745 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-2r7hr" Apr 17 14:13:12.847257 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:12.847233 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Apr 17 14:13:12.848336 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:12.848316 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Apr 17 14:13:12.848432 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:12.848318 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-t82pm\"" Apr 17 14:13:12.853369 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:12.853341 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-2r7hr"] Apr 17 14:13:12.922617 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:12.922578 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4gl6\" (UniqueName: \"kubernetes.io/projected/e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb-kube-api-access-z4gl6\") pod \"cert-manager-759f64656b-2r7hr\" (UID: \"e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb\") " pod="cert-manager/cert-manager-759f64656b-2r7hr" Apr 17 14:13:12.922793 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:12.922629 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb-bound-sa-token\") pod \"cert-manager-759f64656b-2r7hr\" (UID: \"e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb\") " pod="cert-manager/cert-manager-759f64656b-2r7hr" Apr 17 14:13:13.023446 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:13.023409 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4gl6\" (UniqueName: \"kubernetes.io/projected/e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb-kube-api-access-z4gl6\") pod \"cert-manager-759f64656b-2r7hr\" (UID: \"e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb\") " pod="cert-manager/cert-manager-759f64656b-2r7hr" Apr 17 14:13:13.023609 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:13.023460 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb-bound-sa-token\") pod \"cert-manager-759f64656b-2r7hr\" (UID: \"e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb\") " pod="cert-manager/cert-manager-759f64656b-2r7hr" Apr 17 14:13:13.033024 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:13.032984 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb-bound-sa-token\") pod \"cert-manager-759f64656b-2r7hr\" (UID: \"e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb\") " pod="cert-manager/cert-manager-759f64656b-2r7hr" Apr 17 14:13:13.033286 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:13.033260 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4gl6\" (UniqueName: \"kubernetes.io/projected/e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb-kube-api-access-z4gl6\") pod \"cert-manager-759f64656b-2r7hr\" (UID: \"e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb\") " pod="cert-manager/cert-manager-759f64656b-2r7hr" Apr 17 14:13:13.163624 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:13.163595 2568 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-2r7hr" Apr 17 14:13:13.285637 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:13.285612 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-2r7hr"] Apr 17 14:13:13.288280 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:13:13.288247 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode03a52f6_ec51_4aad_bf0f_ab5ed71d34cb.slice/crio-7bcebdb233c693259217815cebd37dc966ad0fef0d82983b22777839f63ef7ad WatchSource:0}: Error finding container 7bcebdb233c693259217815cebd37dc966ad0fef0d82983b22777839f63ef7ad: Status 404 returned error can't find the container with id 7bcebdb233c693259217815cebd37dc966ad0fef0d82983b22777839f63ef7ad Apr 17 14:13:13.290022 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:13.290003 2568 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 14:13:13.386639 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:13.386607 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-2r7hr" event={"ID":"e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb","Type":"ContainerStarted","Data":"7bcebdb233c693259217815cebd37dc966ad0fef0d82983b22777839f63ef7ad"} Apr 17 14:13:17.401276 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:17.401243 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-2r7hr" event={"ID":"e03a52f6-ec51-4aad-bf0f-ab5ed71d34cb","Type":"ContainerStarted","Data":"726a6904a07e134611a9a89a00cbb0172ce667d4dc85493d6b156a7b33c1b1f4"} Apr 17 14:13:17.417728 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:13:17.417678 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-2r7hr" podStartSLOduration=2.204904736 podStartE2EDuration="5.417663031s" podCreationTimestamp="2026-04-17 14:13:12 +0000 UTC" firstStartedPulling="2026-04-17 14:13:13.290188446 +0000 UTC m=+342.627979541" lastFinishedPulling="2026-04-17 14:13:16.502946745 +0000 UTC m=+345.840737836" observedRunningTime="2026-04-17 14:13:17.415990673 +0000 UTC m=+346.753781805" watchObservedRunningTime="2026-04-17 14:13:17.417663031 +0000 UTC m=+346.755454142" Apr 17 14:16:51.037247 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.037211 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7"] Apr 17 14:16:51.040338 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.040321 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" Apr 17 14:16:51.043036 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.043013 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-rkvvn\"/\"openshift-service-ca.crt\"" Apr 17 14:16:51.043119 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.043033 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-rkvvn\"/\"kube-root-ca.crt\"" Apr 17 14:16:51.044035 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.044017 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-rkvvn\"/\"default-dockercfg-w5m8m\"" Apr 17 14:16:51.048465 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.048442 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7"] Apr 17 14:16:51.129896 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.129836 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86q8\" (UniqueName: \"kubernetes.io/projected/aef747ee-7967-440d-8926-e228b0d3d0ed-kube-api-access-j86q8\") pod \"test-trainjob-pp4c4-node-0-0-swcm7\" (UID: \"aef747ee-7967-440d-8926-e228b0d3d0ed\") " pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" Apr 17 14:16:51.231123 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.231083 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j86q8\" (UniqueName: \"kubernetes.io/projected/aef747ee-7967-440d-8926-e228b0d3d0ed-kube-api-access-j86q8\") pod \"test-trainjob-pp4c4-node-0-0-swcm7\" (UID: \"aef747ee-7967-440d-8926-e228b0d3d0ed\") " pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" Apr 17 14:16:51.238842 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.238819 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j86q8\" (UniqueName: \"kubernetes.io/projected/aef747ee-7967-440d-8926-e228b0d3d0ed-kube-api-access-j86q8\") pod \"test-trainjob-pp4c4-node-0-0-swcm7\" (UID: \"aef747ee-7967-440d-8926-e228b0d3d0ed\") " pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" Apr 17 14:16:51.349290 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.349205 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" Apr 17 14:16:51.472144 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:51.472119 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7"] Apr 17 14:16:51.474996 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:16:51.474958 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaef747ee_7967_440d_8926_e228b0d3d0ed.slice/crio-47597777f4c6965873aba1bb8a53cd00b908f8bed878749fa02f135f5ae4e1f3 WatchSource:0}: Error finding container 47597777f4c6965873aba1bb8a53cd00b908f8bed878749fa02f135f5ae4e1f3: Status 404 returned error can't find the container with id 47597777f4c6965873aba1bb8a53cd00b908f8bed878749fa02f135f5ae4e1f3 Apr 17 14:16:52.041737 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:16:52.041699 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" event={"ID":"aef747ee-7967-440d-8926-e228b0d3d0ed","Type":"ContainerStarted","Data":"47597777f4c6965873aba1bb8a53cd00b908f8bed878749fa02f135f5ae4e1f3"} Apr 17 14:17:31.124507 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:17:31.124478 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:17:31.125380 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:17:31.125360 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:21:58.015867 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:21:58.015830 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" event={"ID":"aef747ee-7967-440d-8926-e228b0d3d0ed","Type":"ContainerStarted","Data":"de3b8e8ab70024f2d09db9ea0b8cc9480f1a9e3fc7e306a6e318270f5de1e936"} Apr 17 14:21:58.041000 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:21:58.040951 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" podStartSLOduration=0.618375406 podStartE2EDuration="5m7.040936625s" podCreationTimestamp="2026-04-17 14:16:51 +0000 UTC" firstStartedPulling="2026-04-17 14:16:51.476814316 +0000 UTC m=+560.814605405" lastFinishedPulling="2026-04-17 14:21:57.899375522 +0000 UTC m=+867.237166624" observedRunningTime="2026-04-17 14:21:58.039405977 +0000 UTC m=+867.377197104" watchObservedRunningTime="2026-04-17 14:21:58.040936625 +0000 UTC m=+867.378727735" Apr 17 14:22:04.035719 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:04.035683 2568 generic.go:358] "Generic (PLEG): container finished" podID="aef747ee-7967-440d-8926-e228b0d3d0ed" containerID="de3b8e8ab70024f2d09db9ea0b8cc9480f1a9e3fc7e306a6e318270f5de1e936" exitCode=0 Apr 17 14:22:04.036143 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:04.035759 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" event={"ID":"aef747ee-7967-440d-8926-e228b0d3d0ed","Type":"ContainerDied","Data":"de3b8e8ab70024f2d09db9ea0b8cc9480f1a9e3fc7e306a6e318270f5de1e936"} Apr 17 14:22:05.164719 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:05.164694 2568 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" Apr 17 14:22:05.230553 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:05.230471 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j86q8\" (UniqueName: \"kubernetes.io/projected/aef747ee-7967-440d-8926-e228b0d3d0ed-kube-api-access-j86q8\") pod \"aef747ee-7967-440d-8926-e228b0d3d0ed\" (UID: \"aef747ee-7967-440d-8926-e228b0d3d0ed\") " Apr 17 14:22:05.232693 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:05.232661 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef747ee-7967-440d-8926-e228b0d3d0ed-kube-api-access-j86q8" (OuterVolumeSpecName: "kube-api-access-j86q8") pod "aef747ee-7967-440d-8926-e228b0d3d0ed" (UID: "aef747ee-7967-440d-8926-e228b0d3d0ed"). InnerVolumeSpecName "kube-api-access-j86q8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:22:05.331699 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:05.331668 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j86q8\" (UniqueName: \"kubernetes.io/projected/aef747ee-7967-440d-8926-e228b0d3d0ed-kube-api-access-j86q8\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:22:06.042312 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.042283 2568 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" Apr 17 14:22:06.042509 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.042282 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7" event={"ID":"aef747ee-7967-440d-8926-e228b0d3d0ed","Type":"ContainerDied","Data":"47597777f4c6965873aba1bb8a53cd00b908f8bed878749fa02f135f5ae4e1f3"} Apr 17 14:22:06.042509 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.042393 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47597777f4c6965873aba1bb8a53cd00b908f8bed878749fa02f135f5ae4e1f3" Apr 17 14:22:06.658851 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.658811 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh"] Apr 17 14:22:06.659319 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.659306 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aef747ee-7967-440d-8926-e228b0d3d0ed" containerName="node" Apr 17 14:22:06.659385 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.659325 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef747ee-7967-440d-8926-e228b0d3d0ed" containerName="node" Apr 17 14:22:06.659437 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.659408 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="aef747ee-7967-440d-8926-e228b0d3d0ed" containerName="node" Apr 17 14:22:06.685007 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.684977 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" Apr 17 14:22:06.692000 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.691951 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-qgx6c\"/\"kube-root-ca.crt\"" Apr 17 14:22:06.693182 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.693163 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-qgx6c\"/\"default-dockercfg-sl8q7\"" Apr 17 14:22:06.693326 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.693206 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-qgx6c\"/\"openshift-service-ca.crt\"" Apr 17 14:22:06.694093 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.694053 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh"] Apr 17 14:22:06.743257 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.743225 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gs49\" (UniqueName: \"kubernetes.io/projected/ee4d51e5-1884-4547-ba81-0b11e8261e3c-kube-api-access-2gs49\") pod \"test-trainjob-tqtx6-node-0-0-2qtwh\" (UID: \"ee4d51e5-1884-4547-ba81-0b11e8261e3c\") " pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" Apr 17 14:22:06.844381 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.844343 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2gs49\" (UniqueName: \"kubernetes.io/projected/ee4d51e5-1884-4547-ba81-0b11e8261e3c-kube-api-access-2gs49\") pod \"test-trainjob-tqtx6-node-0-0-2qtwh\" (UID: \"ee4d51e5-1884-4547-ba81-0b11e8261e3c\") " pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" Apr 17 14:22:06.862048 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.862009 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gs49\" (UniqueName: \"kubernetes.io/projected/ee4d51e5-1884-4547-ba81-0b11e8261e3c-kube-api-access-2gs49\") pod \"test-trainjob-tqtx6-node-0-0-2qtwh\" (UID: \"ee4d51e5-1884-4547-ba81-0b11e8261e3c\") " pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" Apr 17 14:22:06.994475 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:06.994390 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" Apr 17 14:22:07.116292 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:07.116235 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh"] Apr 17 14:22:07.118600 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:22:07.118569 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee4d51e5_1884_4547_ba81_0b11e8261e3c.slice/crio-bdd108d9570a7dd91fe410607a45b80a0df53ef96324e73d02747697c9aeed4e WatchSource:0}: Error finding container bdd108d9570a7dd91fe410607a45b80a0df53ef96324e73d02747697c9aeed4e: Status 404 returned error can't find the container with id bdd108d9570a7dd91fe410607a45b80a0df53ef96324e73d02747697c9aeed4e Apr 17 14:22:07.120567 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:07.120545 2568 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 14:22:08.051546 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:08.051485 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" event={"ID":"ee4d51e5-1884-4547-ba81-0b11e8261e3c","Type":"ContainerStarted","Data":"bdd108d9570a7dd91fe410607a45b80a0df53ef96324e73d02747697c9aeed4e"} Apr 17 14:22:31.157046 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:31.156960 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:22:31.157046 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:22:31.156960 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:26:28.892335 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:28.892292 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" event={"ID":"ee4d51e5-1884-4547-ba81-0b11e8261e3c","Type":"ContainerStarted","Data":"f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8"} Apr 17 14:26:28.916668 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:28.916606 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" podStartSLOduration=1.783741446 podStartE2EDuration="4m22.916586334s" podCreationTimestamp="2026-04-17 14:22:06 +0000 UTC" firstStartedPulling="2026-04-17 14:22:07.12069906 +0000 UTC m=+876.458490152" lastFinishedPulling="2026-04-17 14:26:28.253543948 +0000 UTC m=+1137.591335040" observedRunningTime="2026-04-17 14:26:28.916149585 +0000 UTC m=+1138.253940698" watchObservedRunningTime="2026-04-17 14:26:28.916586334 +0000 UTC m=+1138.254377447" Apr 17 14:26:35.914481 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:35.914446 2568 generic.go:358] "Generic (PLEG): container finished" podID="ee4d51e5-1884-4547-ba81-0b11e8261e3c" containerID="f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8" exitCode=0 Apr 17 14:26:35.914908 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:35.914521 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" event={"ID":"ee4d51e5-1884-4547-ba81-0b11e8261e3c","Type":"ContainerDied","Data":"f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8"} Apr 17 14:26:37.153546 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:26:37.153521 2568 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" Apr 17 14:26:37.167314 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:37.167289 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gs49\" (UniqueName: \"kubernetes.io/projected/ee4d51e5-1884-4547-ba81-0b11e8261e3c-kube-api-access-2gs49\") pod \"ee4d51e5-1884-4547-ba81-0b11e8261e3c\" (UID: \"ee4d51e5-1884-4547-ba81-0b11e8261e3c\") " Apr 17 14:26:37.169887 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:37.169806 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4d51e5-1884-4547-ba81-0b11e8261e3c-kube-api-access-2gs49" (OuterVolumeSpecName: "kube-api-access-2gs49") pod "ee4d51e5-1884-4547-ba81-0b11e8261e3c" (UID: "ee4d51e5-1884-4547-ba81-0b11e8261e3c"). InnerVolumeSpecName "kube-api-access-2gs49". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:26:37.268056 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:37.268025 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2gs49\" (UniqueName: \"kubernetes.io/projected/ee4d51e5-1884-4547-ba81-0b11e8261e3c-kube-api-access-2gs49\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:26:37.921253 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:37.921215 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" event={"ID":"ee4d51e5-1884-4547-ba81-0b11e8261e3c","Type":"ContainerDied","Data":"bdd108d9570a7dd91fe410607a45b80a0df53ef96324e73d02747697c9aeed4e"} Apr 17 14:26:37.921253 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:37.921247 2568 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh" Apr 17 14:26:37.921460 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:37.921252 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdd108d9570a7dd91fe410607a45b80a0df53ef96324e73d02747697c9aeed4e" Apr 17 14:26:38.849390 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.849357 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj"] Apr 17 14:26:38.849732 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.849704 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee4d51e5-1884-4547-ba81-0b11e8261e3c" containerName="node" Apr 17 14:26:38.849732 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.849719 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4d51e5-1884-4547-ba81-0b11e8261e3c" containerName="node" Apr 17 14:26:38.849805 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.849784 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="ee4d51e5-1884-4547-ba81-0b11e8261e3c" containerName="node" Apr 17 14:26:38.870351 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.870315 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj"] Apr 17 14:26:38.870504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.870447 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" Apr 17 14:26:38.873185 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.873156 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-skw8w\"/\"kube-root-ca.crt\"" Apr 17 14:26:38.874228 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.874207 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-skw8w\"/\"default-dockercfg-n7bhv\"" Apr 17 14:26:38.874228 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.874207 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-skw8w\"/\"openshift-service-ca.crt\"" Apr 17 14:26:38.881942 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.881916 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwpdw\" (UniqueName: \"kubernetes.io/projected/ad3d09e2-6403-476e-bacf-0bda91cc67c9-kube-api-access-vwpdw\") pod \"test-trainjob-cq27n-node-0-0-qr9kj\" (UID: \"ad3d09e2-6403-476e-bacf-0bda91cc67c9\") " pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" Apr 17 14:26:38.982390 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.982357 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwpdw\" (UniqueName: \"kubernetes.io/projected/ad3d09e2-6403-476e-bacf-0bda91cc67c9-kube-api-access-vwpdw\") pod \"test-trainjob-cq27n-node-0-0-qr9kj\" (UID: \"ad3d09e2-6403-476e-bacf-0bda91cc67c9\") " pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" Apr 17 14:26:38.993165 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:38.993128 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwpdw\" (UniqueName: \"kubernetes.io/projected/ad3d09e2-6403-476e-bacf-0bda91cc67c9-kube-api-access-vwpdw\") pod \"test-trainjob-cq27n-node-0-0-qr9kj\" (UID: \"ad3d09e2-6403-476e-bacf-0bda91cc67c9\") " pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" Apr 17 14:26:39.180753 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:39.180619 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" Apr 17 14:26:39.441468 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:39.441411 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj"] Apr 17 14:26:39.443762 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:26:39.443730 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad3d09e2_6403_476e_bacf_0bda91cc67c9.slice/crio-f24236f27d82619e0db9ddcef60866fa4babf3eb08a2c0533ee2b4a174d2c249 WatchSource:0}: Error finding container f24236f27d82619e0db9ddcef60866fa4babf3eb08a2c0533ee2b4a174d2c249: Status 404 returned error can't find the container with id f24236f27d82619e0db9ddcef60866fa4babf3eb08a2c0533ee2b4a174d2c249 Apr 17 14:26:39.934618 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:26:39.934578 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" event={"ID":"ad3d09e2-6403-476e-bacf-0bda91cc67c9","Type":"ContainerStarted","Data":"f24236f27d82619e0db9ddcef60866fa4babf3eb08a2c0533ee2b4a174d2c249"} Apr 17 14:27:31.184496 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:27:31.184464 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:27:31.186543 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:27:31.186521 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:27:58.229703 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:27:58.229664 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" event={"ID":"ad3d09e2-6403-476e-bacf-0bda91cc67c9","Type":"ContainerStarted","Data":"aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9"} Apr 17 14:27:58.247183 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:27:58.247134 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" podStartSLOduration=2.208930169 podStartE2EDuration="1m20.247119655s" podCreationTimestamp="2026-04-17 14:26:38 +0000 UTC" firstStartedPulling="2026-04-17 14:26:39.445631791 +0000 UTC m=+1148.783422886" lastFinishedPulling="2026-04-17 14:27:57.483821267 +0000 UTC m=+1226.821612372" observedRunningTime="2026-04-17 14:27:58.245396508 +0000 UTC m=+1227.583187622" watchObservedRunningTime="2026-04-17 14:27:58.247119655 +0000 UTC m=+1227.584910765" Apr 17 14:28:01.240241 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:01.240208 2568 generic.go:358] "Generic (PLEG): container finished" podID="ad3d09e2-6403-476e-bacf-0bda91cc67c9" containerID="aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9" exitCode=0 Apr 17 14:28:01.240648 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:01.240284 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" event={"ID":"ad3d09e2-6403-476e-bacf-0bda91cc67c9","Type":"ContainerDied","Data":"aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9"} Apr 17 14:28:02.388771 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:02.388747 2568 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" Apr 17 14:28:02.530713 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:02.530634 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwpdw\" (UniqueName: \"kubernetes.io/projected/ad3d09e2-6403-476e-bacf-0bda91cc67c9-kube-api-access-vwpdw\") pod \"ad3d09e2-6403-476e-bacf-0bda91cc67c9\" (UID: \"ad3d09e2-6403-476e-bacf-0bda91cc67c9\") " Apr 17 14:28:02.532791 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:02.532764 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3d09e2-6403-476e-bacf-0bda91cc67c9-kube-api-access-vwpdw" (OuterVolumeSpecName: "kube-api-access-vwpdw") pod "ad3d09e2-6403-476e-bacf-0bda91cc67c9" (UID: "ad3d09e2-6403-476e-bacf-0bda91cc67c9"). InnerVolumeSpecName "kube-api-access-vwpdw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:28:02.632112 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:02.632075 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwpdw\" (UniqueName: \"kubernetes.io/projected/ad3d09e2-6403-476e-bacf-0bda91cc67c9-kube-api-access-vwpdw\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:28:03.248714 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:03.248683 2568 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" Apr 17 14:28:03.248986 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:03.248714 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj" event={"ID":"ad3d09e2-6403-476e-bacf-0bda91cc67c9","Type":"ContainerDied","Data":"f24236f27d82619e0db9ddcef60866fa4babf3eb08a2c0533ee2b4a174d2c249"} Apr 17 14:28:03.248986 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:03.248748 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f24236f27d82619e0db9ddcef60866fa4babf3eb08a2c0533ee2b4a174d2c249" Apr 17 14:28:04.190257 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.190221 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv"] Apr 17 14:28:04.190609 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.190550 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad3d09e2-6403-476e-bacf-0bda91cc67c9" containerName="node" Apr 17 14:28:04.190609 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.190560 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3d09e2-6403-476e-bacf-0bda91cc67c9" containerName="node" Apr 17 14:28:04.190686 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.190626 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="ad3d09e2-6403-476e-bacf-0bda91cc67c9" containerName="node" Apr 17 14:28:04.233309 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.233278 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv"] Apr 17 14:28:04.233463 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.233383 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" Apr 17 14:28:04.236108 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.236080 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-p4bw6\"/\"openshift-service-ca.crt\"" Apr 17 14:28:04.237188 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.237151 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-p4bw6\"/\"kube-root-ca.crt\"" Apr 17 14:28:04.237293 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.237195 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-p4bw6\"/\"default-dockercfg-q9zmv\"" Apr 17 14:28:04.347000 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.346956 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6jcj\" (UniqueName: \"kubernetes.io/projected/2b702c37-1c98-4f2d-a677-0c51a4807b29-kube-api-access-p6jcj\") pod \"test-trainjob-5488s-node-0-0-tcmwv\" (UID: \"2b702c37-1c98-4f2d-a677-0c51a4807b29\") " pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" Apr 17 14:28:04.448256 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.448161 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p6jcj\" (UniqueName: \"kubernetes.io/projected/2b702c37-1c98-4f2d-a677-0c51a4807b29-kube-api-access-p6jcj\") pod \"test-trainjob-5488s-node-0-0-tcmwv\" (UID: \"2b702c37-1c98-4f2d-a677-0c51a4807b29\") " pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" Apr 17 14:28:04.456450 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.456408 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6jcj\" (UniqueName: \"kubernetes.io/projected/2b702c37-1c98-4f2d-a677-0c51a4807b29-kube-api-access-p6jcj\") pod \"test-trainjob-5488s-node-0-0-tcmwv\" (UID: \"2b702c37-1c98-4f2d-a677-0c51a4807b29\") " pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" Apr 17 14:28:04.543261 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.543230 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" Apr 17 14:28:04.668764 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.668739 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv"] Apr 17 14:28:04.671088 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:28:04.671055 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b702c37_1c98_4f2d_a677_0c51a4807b29.slice/crio-5bf5ec3e7de46c00d963aae04ac0aa0ef450134e6a34132f96e0f96e06542ea8 WatchSource:0}: Error finding container 5bf5ec3e7de46c00d963aae04ac0aa0ef450134e6a34132f96e0f96e06542ea8: Status 404 returned error can't find the container with id 5bf5ec3e7de46c00d963aae04ac0aa0ef450134e6a34132f96e0f96e06542ea8 Apr 17 14:28:04.672995 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:04.672978 2568 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 14:28:05.256359 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:28:05.256323 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" event={"ID":"2b702c37-1c98-4f2d-a677-0c51a4807b29","Type":"ContainerStarted","Data":"5bf5ec3e7de46c00d963aae04ac0aa0ef450134e6a34132f96e0f96e06542ea8"} Apr 17 14:32:31.208307 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:32:31.208268 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:32:31.211153 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:32:31.211131 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:35:08.797112 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:08.797077 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" event={"ID":"2b702c37-1c98-4f2d-a677-0c51a4807b29","Type":"ContainerStarted","Data":"6d026fa4c6de2da3f2550fa799ed2fdc8f2aea315d400dc7ae799de55b835380"} Apr 17 14:35:08.799635 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:08.799616 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-p4bw6\"/\"default-dockercfg-q9zmv\"" Apr 17 14:35:08.821805 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:08.821754 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" podStartSLOduration=0.862691859 podStartE2EDuration="7m4.821735139s" podCreationTimestamp="2026-04-17 14:28:04 +0000 UTC" firstStartedPulling="2026-04-17 14:28:04.673158197 +0000 UTC m=+1234.010949300" lastFinishedPulling="2026-04-17 14:35:08.632201491 +0000 UTC m=+1657.969992580" observedRunningTime="2026-04-17 14:35:08.821071013 +0000 UTC m=+1658.158862126" watchObservedRunningTime="2026-04-17 14:35:08.821735139 +0000 UTC m=+1658.159526251" Apr 17 14:35:08.929063 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:08.929030 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-p4bw6\"/\"kube-root-ca.crt\"" Apr 17 14:35:08.939273 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:08.939239 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-p4bw6\"/\"openshift-service-ca.crt\"" Apr 17 14:35:12.813173 ip-10-0-140-104 
kubenswrapper[2568]: I0417 14:35:12.813140 2568 generic.go:358] "Generic (PLEG): container finished" podID="2b702c37-1c98-4f2d-a677-0c51a4807b29" containerID="6d026fa4c6de2da3f2550fa799ed2fdc8f2aea315d400dc7ae799de55b835380" exitCode=0 Apr 17 14:35:12.813586 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:12.813223 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" event={"ID":"2b702c37-1c98-4f2d-a677-0c51a4807b29","Type":"ContainerDied","Data":"6d026fa4c6de2da3f2550fa799ed2fdc8f2aea315d400dc7ae799de55b835380"} Apr 17 14:35:13.946767 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:13.946744 2568 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" Apr 17 14:35:14.070727 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:14.070636 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6jcj\" (UniqueName: \"kubernetes.io/projected/2b702c37-1c98-4f2d-a677-0c51a4807b29-kube-api-access-p6jcj\") pod \"2b702c37-1c98-4f2d-a677-0c51a4807b29\" (UID: \"2b702c37-1c98-4f2d-a677-0c51a4807b29\") " Apr 17 14:35:14.072908 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:14.072880 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b702c37-1c98-4f2d-a677-0c51a4807b29-kube-api-access-p6jcj" (OuterVolumeSpecName: "kube-api-access-p6jcj") pod "2b702c37-1c98-4f2d-a677-0c51a4807b29" (UID: "2b702c37-1c98-4f2d-a677-0c51a4807b29"). InnerVolumeSpecName "kube-api-access-p6jcj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 14:35:14.172191 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:14.172157 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p6jcj\" (UniqueName: \"kubernetes.io/projected/2b702c37-1c98-4f2d-a677-0c51a4807b29-kube-api-access-p6jcj\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\"" Apr 17 14:35:14.820294 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:14.820265 2568 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" Apr 17 14:35:14.820483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:14.820264 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv" event={"ID":"2b702c37-1c98-4f2d-a677-0c51a4807b29","Type":"ContainerDied","Data":"5bf5ec3e7de46c00d963aae04ac0aa0ef450134e6a34132f96e0f96e06542ea8"} Apr 17 14:35:14.820483 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:14.820376 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf5ec3e7de46c00d963aae04ac0aa0ef450134e6a34132f96e0f96e06542ea8" Apr 17 14:35:15.224133 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.224102 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps"] Apr 17 14:35:15.224505 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.224434 2568 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b702c37-1c98-4f2d-a677-0c51a4807b29" containerName="node" Apr 17 14:35:15.224505 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.224445 2568 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b702c37-1c98-4f2d-a677-0c51a4807b29" containerName="node" Apr 17 14:35:15.224505 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.224494 2568 memory_manager.go:356] "RemoveStaleState removing state" podUID="2b702c37-1c98-4f2d-a677-0c51a4807b29" containerName="node" Apr 17 14:35:15.249381 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.249340 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps"] Apr 17 14:35:15.249536 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.249458 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" Apr 17 14:35:15.252601 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.252577 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-lgg4b\"/\"kube-root-ca.crt\"" Apr 17 14:35:15.252740 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.252605 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-lgg4b\"/\"openshift-service-ca.crt\"" Apr 17 14:35:15.253672 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.253655 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-lgg4b\"/\"default-dockercfg-48j5g\"" Apr 17 14:35:15.382824 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.382782 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7w4\" (UniqueName: \"kubernetes.io/projected/6c2329fe-ec3d-4b3f-9491-de3e4740b94a-kube-api-access-4c7w4\") pod \"test-trainjob-hpdnx-node-0-0-4xxps\" (UID: \"6c2329fe-ec3d-4b3f-9491-de3e4740b94a\") " pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" Apr 17 14:35:15.484190 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.484106 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7w4\" (UniqueName: \"kubernetes.io/projected/6c2329fe-ec3d-4b3f-9491-de3e4740b94a-kube-api-access-4c7w4\") pod \"test-trainjob-hpdnx-node-0-0-4xxps\" (UID: \"6c2329fe-ec3d-4b3f-9491-de3e4740b94a\") " pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" Apr 17 14:35:15.492655 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.492629 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7w4\" (UniqueName: \"kubernetes.io/projected/6c2329fe-ec3d-4b3f-9491-de3e4740b94a-kube-api-access-4c7w4\") pod \"test-trainjob-hpdnx-node-0-0-4xxps\" (UID: \"6c2329fe-ec3d-4b3f-9491-de3e4740b94a\") " pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" Apr 17 14:35:15.558704 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.558667 2568 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" Apr 17 14:35:15.698042 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.698014 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps"] Apr 17 14:35:15.700526 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:35:15.700493 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2329fe_ec3d_4b3f_9491_de3e4740b94a.slice/crio-d65f0f2b4c1843b73669d92258244fdbf7cd3d5582418060c5540fa21b8a0c78 WatchSource:0}: Error finding container d65f0f2b4c1843b73669d92258244fdbf7cd3d5582418060c5540fa21b8a0c78: Status 404 returned error can't find the container with id d65f0f2b4c1843b73669d92258244fdbf7cd3d5582418060c5540fa21b8a0c78 Apr 17 14:35:15.702535 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.702518 2568 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 14:35:15.824706 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:35:15.824620 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" event={"ID":"6c2329fe-ec3d-4b3f-9491-de3e4740b94a","Type":"ContainerStarted","Data":"d65f0f2b4c1843b73669d92258244fdbf7cd3d5582418060c5540fa21b8a0c78"} Apr 17 14:37:31.241886 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:37:31.241755 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:37:31.245945 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:37:31.243074 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:42:07.146328 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:42:07.146290 2568 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Apr 17 14:42:07.146993 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:42:07.146363 2568 container_gc.go:86] "Attempting to delete unused containers" Apr 17 14:42:07.148100 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:42:07.148071 2568 scope.go:117] "RemoveContainer" containerID="aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9" Apr 17 14:42:11.081193 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:42:11.081160 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasDiskPressure" Apr 17 14:43:07.514084 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:43:07.514026 2568 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Apr 17 14:43:07.514084 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:43:07.514069 2568 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 17 14:43:07.514084 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:43:07.514082 2568 image_gc_manager.go:230] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 17 14:44:07.149338 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:44:07.149283 2568 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" 
containerID="aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9" Apr 17 14:44:07.149951 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:44:07.149347 2568 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" containerID="aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9" Apr 17 14:44:07.149951 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:44:07.149372 2568 scope.go:117] "RemoveContainer" containerID="f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8" Apr 17 14:44:31.249652 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:44:31.249614 2568 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="nil" Apr 17 14:44:31.250189 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:44:31.249661 2568 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:44:31.250189 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:44:31.249680 2568 image_gc_manager.go:222] "Failed to monitor images" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:44:31.252901 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:44:31.252870 2568 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Apr 17 14:44:31.253022 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:44:31.252908 2568 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 17 14:44:31.253022 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:44:31.252921 2568 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 17 14:45:37.515322 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:45:37.515278 2568 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="nil" Apr 17 14:45:37.515322 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:45:37.515328 2568 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:45:37.515829 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:45:37.515341 2568 image_gc_manager.go:230] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:46:07.150698 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:46:07.150655 2568 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" containerID="f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8" Apr 17 14:46:07.150698 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:46:07.150702 2568 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" containerID="f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8" Apr 17 14:46:07.151361 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:07.150723 2568 scope.go:117] "RemoveContainer" containerID="6d026fa4c6de2da3f2550fa799ed2fdc8f2aea315d400dc7ae799de55b835380" Apr 17 14:46:13.458360 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:13.458136 2568 scope.go:117] "RemoveContainer" containerID="de3b8e8ab70024f2d09db9ea0b8cc9480f1a9e3fc7e306a6e318270f5de1e936" Apr 17 14:46:13.533688 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:13.533668 2568 image_gc_manager.go:447] "Attempting to delete unused images" Apr 17 14:46:13.547758 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:13.547732 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:46:13.554577 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:13.554551 2568 image_gc_manager.go:514] "Removing image to free bytes" imageID="5151a6030289f6d1ef2c984ebd3e465632a3bf64de79db6f7b3d6e2e638b0557" size=1065600018 runtimeHandler="" Apr 17 14:46:13.555212 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:46:13.555188 2568 log.go:32] "RemoveImage from image service failed" err="rpc error: code = Unknown desc = delete image: image used by 125fffbc4139fa11a4df3216d085fccdee776cab83aacce86ba46e01c314448c: image is in use by a container" image="5151a6030289f6d1ef2c984ebd3e465632a3bf64de79db6f7b3d6e2e638b0557" Apr 17 14:46:13.555295 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:46:13.555228 2568 kuberuntime_image.go:137] "Failed to remove image" err="rpc error: code = Unknown desc = delete image: image used by 125fffbc4139fa11a4df3216d085fccdee776cab83aacce86ba46e01c314448c: image is in use by a container" image="5151a6030289f6d1ef2c984ebd3e465632a3bf64de79db6f7b3d6e2e638b0557" Apr 17 14:46:13.555295 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:13.555248 2568 image_gc_manager.go:514] "Removing image to free bytes" 
imageID="ac4be6c7a52584c773ae754a4ccfb9fb1db440f4c9d858ad0f78765a85625b4b" size=1065006420 runtimeHandler="" Apr 17 14:46:14.139546 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:46:14.139500 2568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: writing blob: adding layer with blob \"sha256:56a6a09b03e81131ca690210efc814701da032525ada07591ffe5d6c5d5a4906\"/\"\"/\"sha256:0832a7269a80e5ed5e1c5c749a8d30a6b248ce9b970d81f61a4f93a3b72673f9\": unpacking failed (error: exit status 1; output: write /opt/rocm-6.4.3/lib/rocblas/library/TensileLibrary_Type_SS_Contraction_l_Ailk_Bjlk_Cijk_Dijk_XPU_gfx942.co: no space left on device); artifact err: provided artifact is a container image" image="quay.io/opendatahub/odh-training-rocm64-torch29-py312@sha256:8a053c8ee3a4c326b745b2516a291c6b8a6e92defc5406ac2e9590bb742153f6" Apr 17 14:46:14.139743 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:46:14.139694 2568 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:node,Image:quay.io/opendatahub/odh-training-rocm64-torch29-py312@sha256:8a053c8ee3a4c326b745b2516a291c6b8a6e92defc5406ac2e9590bb742153f6,Command:[python -c import torch; print(f'PyTorch version: {torch.__version__}'); print('Training completed successfully')],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:29500,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:PET_NNODES,Value:1,ValueFrom:nil,},EnvVar{Name:PET_NPROC_PER_NODE,Value:1,ValueFrom:nil,},EnvVar{Name:PET_NODE_RANK,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['batch.kubernetes.io/job-completion-index'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:PET_MASTER_ADDR,Value:test-trainjob-hpdnx-node-0-0.test-trainjob-hpdnx,ValueFrom:nil,},EnvVar{Name:PET_MASTER_PORT,Value:29500,ValueFrom:nil,},EnvVar{Name:JOB_COMPLETION_INDEX,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.labels['batch.kubernetes.io/job-completion-index'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4c7w4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-trainjob-hpdnx-node-0-0-4xxps_test-ns-lgg4b(6c2329fe-ec3d-4b3f-9491-de3e4740b94a): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: writing blob: adding layer with blob 
\"sha256:56a6a09b03e81131ca690210efc814701da032525ada07591ffe5d6c5d5a4906\"/\"\"/\"sha256:0832a7269a80e5ed5e1c5c749a8d30a6b248ce9b970d81f61a4f93a3b72673f9\": unpacking failed (error: exit status 1; output: write /opt/rocm-6.4.3/lib/rocblas/library/TensileLibrary_Type_SS_Contraction_l_Ailk_Bjlk_Cijk_Dijk_XPU_gfx942.co: no space left on device); artifact err: provided artifact is a container image" logger="UnhandledError" Apr 17 14:46:14.140911 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:46:14.140879 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: writing blob: adding layer with blob \\\"sha256:56a6a09b03e81131ca690210efc814701da032525ada07591ffe5d6c5d5a4906\\\"/\\\"\\\"/\\\"sha256:0832a7269a80e5ed5e1c5c749a8d30a6b248ce9b970d81f61a4f93a3b72673f9\\\": unpacking failed (error: exit status 1; output: write /opt/rocm-6.4.3/lib/rocblas/library/TensileLibrary_Type_SS_Contraction_l_Ailk_Bjlk_Cijk_Dijk_XPU_gfx942.co: no space left on device); artifact err: provided artifact is a container image\"" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" podUID="6c2329fe-ec3d-4b3f-9491-de3e4740b94a" Apr 17 14:46:14.172257 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:14.172230 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-lgg4b\"/\"default-dockercfg-48j5g\"" Apr 17 14:46:14.172844 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:14.172822 2568 image_gc_manager.go:514] "Removing image to free bytes" imageID="8cfae5f12a3d5e8f5711d1531d223358c13a3d4b36be844d8c6890efdfa09339" size=622989096 runtimeHandler="" Apr 17 14:46:14.235227 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:14.235154 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-lgg4b\"/\"kube-root-ca.crt\"" Apr 17 14:46:14.238257 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:14.238230 2568 image_gc_manager.go:514] "Removing image to free bytes" imageID="6c4c067f462bc2663748e874791852ccb12bcf84699da52995d89638791c3221" size=468536373 runtimeHandler="" Apr 17 14:46:14.238693 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:14.238675 2568 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 14:46:14.238987 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:46:14.238946 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/opendatahub/odh-training-rocm64-torch29-py312@sha256:8a053c8ee3a4c326b745b2516a291c6b8a6e92defc5406ac2e9590bb742153f6\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: writing blob: adding layer with blob \\\"sha256:56a6a09b03e81131ca690210efc814701da032525ada07591ffe5d6c5d5a4906\\\"/\\\"\\\"/\\\"sha256:0832a7269a80e5ed5e1c5c749a8d30a6b248ce9b970d81f61a4f93a3b72673f9\\\": unpacking failed (error: exit status 1; output: write /opt/rocm-6.4.3/lib/rocblas/library/TensileLibrary_Type_SS_Contraction_l_Ailk_Bjlk_Cijk_Dijk_XPU_gfx942.co: no space left on device); artifact err: provided artifact is a container image\"" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" podUID="6c2329fe-ec3d-4b3f-9491-de3e4740b94a" Apr 17 14:46:14.245044 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:14.244999 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"test-ns-lgg4b\"/\"openshift-service-ca.crt\"" Apr 17 14:46:14.251286 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:14.251253 2568 image_gc_manager.go:514] "Removing image to free bytes" imageID="bd2f0c6a473dfa650b536cfe1992446bf45305b3ace698398143f161694113a5" size=20806872103 runtimeHandler="" Apr 17 14:46:18.218220 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:18.218180 2568 image_gc_manager.go:514] "Removing image to free bytes" imageID="ad110250a85fcdba558f7f776c90e8eeba85487d69852b32b99f6e3e85c4336a" size=23201654703 runtimeHandler="" Apr 17 14:46:22.167332 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:22.167289 2568 image_gc_manager.go:514] "Removing image to free bytes" imageID="025dda36860e1edcbee35aeb77c23dc6099d12fbe19ecd8288e34b8ff70c9a3e" size=7588072891 runtimeHandler="" Apr 17 14:46:25.535896 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:25.535831 2568 image_gc_manager.go:514] "Removing image to free bytes" imageID="819e15fdec92d846e6d5de4b1b2988adcb74f6d3046689fe03c655b03a67975d" size=18873458221 runtimeHandler="" Apr 17 14:46:28.425577 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:28.425531 2568 eviction_manager.go:473] "Eviction manager: unexpected error when attempting to reduce resource pressure" resourceName="ephemeral-storage" err="wanted to free 9223372036854775807 bytes, but freed 72626589807 bytes space with errors in image deletion: rpc error: code = Unknown desc = delete image: image used by 125fffbc4139fa11a4df3216d085fccdee776cab83aacce86ba46e01c314448c: image is in use by a container" Apr 17 14:46:28.466481 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:46:28.466441 2568 eviction_manager.go:383] "Eviction manager: able to reduce resource pressure without evicting pods." resourceName="ephemeral-storage" Apr 17 14:47:17.058306 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:47:17.058277 2568 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-104.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 14:49:31.266838 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:49:31.266765 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:49:31.269708 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:49:31.268502 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log" Apr 17 14:49:31.279235 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:49:31.279215 2568 kubelet.go:1628] "Image garbage collection succeeded" Apr 17 14:54:43.566930 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:54:43.566850 2568 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="nil" Apr 17 14:54:43.566930 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:54:43.566924 2568 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:54:43.622433 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:54:43.566939 2568 image_gc_manager.go:230] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:55:15.214508 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:55:15.214332 2568 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d026fa4c6de2da3f2550fa799ed2fdc8f2aea315d400dc7ae799de55b835380\": container with ID starting with 6d026fa4c6de2da3f2550fa799ed2fdc8f2aea315d400dc7ae799de55b835380 not found: ID does not exist" containerID="6d026fa4c6de2da3f2550fa799ed2fdc8f2aea315d400dc7ae799de55b835380" Apr 17 14:55:15.313949 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:55:15.313913 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9\": container with ID starting with aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9 not found: ID does not exist" containerID="aeeca7f275caacfaf84b7397c60a6e08df152403664fcf6b87d92de65347cec9" Apr 17 14:55:15.413951 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:55:15.413903 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8\": container with ID starting with f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8 not found: ID does not exist" containerID="f483c47582d41ec6ba9fce011d39169bcfca8a75e717316df470172533b6c8d8" Apr 17 14:55:15.909766 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:55:15.909728 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3b8e8ab70024f2d09db9ea0b8cc9480f1a9e3fc7e306a6e318270f5de1e936\": container with ID starting with de3b8e8ab70024f2d09db9ea0b8cc9480f1a9e3fc7e306a6e318270f5de1e936 not found: ID does not exist" containerID="de3b8e8ab70024f2d09db9ea0b8cc9480f1a9e3fc7e306a6e318270f5de1e936" Apr 17 14:55:20.155256 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:20.155218 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps"] Apr 17 14:55:20.256121 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:20.256058 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv"] Apr 17 14:55:20.257533 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:20.257510 2568 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["test-ns-p4bw6/test-trainjob-5488s-node-0-0-tcmwv"] Apr 17 14:55:20.355832 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:20.355801 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj"] Apr 17 14:55:20.358946 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:20.358916 2568 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["test-ns-skw8w/test-trainjob-cq27n-node-0-0-qr9kj"] Apr 17 14:55:20.529688 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:20.529638 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh"] Apr 17 14:55:20.531199 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:20.531174 2568 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["test-ns-qgx6c/test-trainjob-tqtx6-node-0-0-2qtwh"] Apr 17 14:55:21.180344 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:21.180305 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7"] Apr 17 14:55:21.183641 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:21.183614 2568 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["test-ns-rkvvn/test-trainjob-pp4c4-node-0-0-swcm7"] Apr 17 14:55:21.265075 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:21.265039 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b702c37-1c98-4f2d-a677-0c51a4807b29" path="/var/lib/kubelet/pods/2b702c37-1c98-4f2d-a677-0c51a4807b29/volumes" Apr 17 14:55:21.265438 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:21.265421 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3d09e2-6403-476e-bacf-0bda91cc67c9" path="/var/lib/kubelet/pods/ad3d09e2-6403-476e-bacf-0bda91cc67c9/volumes" Apr 17 14:55:21.265744 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:21.265730 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef747ee-7967-440d-8926-e228b0d3d0ed" path="/var/lib/kubelet/pods/aef747ee-7967-440d-8926-e228b0d3d0ed/volumes" Apr 17 14:55:21.266077 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:55:21.266062 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4d51e5-1884-4547-ba81-0b11e8261e3c" path="/var/lib/kubelet/pods/ee4d51e5-1884-4547-ba81-0b11e8261e3c/volumes" Apr 17 14:56:07.125885 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.125822 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wh6bh/must-gather-h8nr6"] Apr 17 14:56:07.144279 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.144248 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wh6bh/must-gather-h8nr6"] Apr 17 14:56:07.144430 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.144373 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wh6bh/must-gather-h8nr6" Apr 17 14:56:07.147352 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.147322 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wh6bh\"/\"kube-root-ca.crt\"" Apr 17 14:56:07.148651 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.148625 2568 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wh6bh\"/\"openshift-service-ca.crt\"" Apr 17 14:56:07.148833 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.148638 2568 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wh6bh\"/\"default-dockercfg-g56sz\"" Apr 17 14:56:07.184024 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.183991 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlbrp\" (UniqueName: \"kubernetes.io/projected/eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4-kube-api-access-xlbrp\") pod \"must-gather-h8nr6\" (UID: \"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4\") " pod="openshift-must-gather-wh6bh/must-gather-h8nr6" Apr 17 14:56:07.184180 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.184035 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4-must-gather-output\") pod \"must-gather-h8nr6\" (UID: \"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4\") " pod="openshift-must-gather-wh6bh/must-gather-h8nr6" Apr 17 14:56:07.285147 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.285106 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlbrp\" (UniqueName: \"kubernetes.io/projected/eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4-kube-api-access-xlbrp\") pod 
\"must-gather-h8nr6\" (UID: \"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4\") " pod="openshift-must-gather-wh6bh/must-gather-h8nr6" Apr 17 14:56:07.285147 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.285156 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4-must-gather-output\") pod \"must-gather-h8nr6\" (UID: \"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4\") " pod="openshift-must-gather-wh6bh/must-gather-h8nr6" Apr 17 14:56:07.285458 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.285442 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4-must-gather-output\") pod \"must-gather-h8nr6\" (UID: \"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4\") " pod="openshift-must-gather-wh6bh/must-gather-h8nr6" Apr 17 14:56:07.294179 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.294153 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlbrp\" (UniqueName: \"kubernetes.io/projected/eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4-kube-api-access-xlbrp\") pod \"must-gather-h8nr6\" (UID: \"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4\") " pod="openshift-must-gather-wh6bh/must-gather-h8nr6" Apr 17 14:56:07.456185 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:07.456151 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wh6bh/must-gather-h8nr6" Apr 17 14:56:31.276188 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:56:31.276139 2568 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="nil" Apr 17 14:56:31.276188 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:56:31.276187 2568 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:56:31.276687 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:31.276198 2568 image_gc_manager.go:222] "Failed to monitor images" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:56:31.280283 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:56:31.280264 2568 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="nil" Apr 17 14:56:31.280342 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:56:31.280284 2568 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" Apr 17 14:56:31.280342 ip-10-0-140-104 kubenswrapper[2568]: E0417 14:56:31.280294 2568 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
Apr 17 14:56:54.795578 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:54.795542 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wh6bh/must-gather-h8nr6"]
Apr 17 14:56:54.799309 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:56:54.799283 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2a22c9_e5fc_4301_9946_eb0bb4c67dd4.slice/crio-5a0cc6168b498791f4c62e8fb71f712d6c7119ff9ff38b8f90e7efbc6312eb49 WatchSource:0}: Error finding container 5a0cc6168b498791f4c62e8fb71f712d6c7119ff9ff38b8f90e7efbc6312eb49: Status 404 returned error can't find the container with id 5a0cc6168b498791f4c62e8fb71f712d6c7119ff9ff38b8f90e7efbc6312eb49
Apr 17 14:56:54.801478 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:54.801455 2568 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 17 14:56:55.155979 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:55.155897 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wh6bh/must-gather-h8nr6" event={"ID":"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4","Type":"ContainerStarted","Data":"5a0cc6168b498791f4c62e8fb71f712d6c7119ff9ff38b8f90e7efbc6312eb49"}
Apr 17 14:56:57.164314 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:57.164279 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" event={"ID":"6c2329fe-ec3d-4b3f-9491-de3e4740b94a","Type":"ContainerStarted","Data":"ba233acab81b1a257b30b3a929a2caaca110e7ae29dfc4241513c84fc76e3e8c"}
Apr 17 14:56:57.164798 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:57.164344 2568 kuberuntime_container.go:864] "Killing container with a grace period" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" podUID="6c2329fe-ec3d-4b3f-9491-de3e4740b94a" containerName="node" containerID="cri-o://ba233acab81b1a257b30b3a929a2caaca110e7ae29dfc4241513c84fc76e3e8c" gracePeriod=30
Apr 17 14:56:57.166135 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:57.166112 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wh6bh/must-gather-h8nr6" event={"ID":"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4","Type":"ContainerStarted","Data":"2159ef7e3b06367bcc58edde07b41c76f9a7a902de162e5b0403bb1eeb231523"}
Apr 17 14:56:57.166239 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:57.166138 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wh6bh/must-gather-h8nr6" event={"ID":"eb2a22c9-e5fc-4301-9946-eb0bb4c67dd4","Type":"ContainerStarted","Data":"a7bb70f693f6a8ad739e8982e655028f7606aff6268dd188592e838d69fa8aa9"}
Apr 17 14:56:57.180661 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:57.180616 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" podStartSLOduration=1.795078993 podStartE2EDuration="21m42.18060386s" podCreationTimestamp="2026-04-17 14:35:15 +0000 UTC" firstStartedPulling="2026-04-17 14:35:15.702701533 +0000 UTC m=+1665.040492637" lastFinishedPulling="2026-04-17 14:56:56.088226412 +0000 UTC m=+2965.426017504" observedRunningTime="2026-04-17 14:56:57.179348802 +0000 UTC m=+2966.517139915" watchObservedRunningTime="2026-04-17 14:56:57.18060386 +0000 UTC m=+2966.518394971"
Apr 17 14:56:57.200175 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:57.200128 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wh6bh/must-gather-h8nr6" podStartSLOduration=48.407642484 podStartE2EDuration="50.200116851s" podCreationTimestamp="2026-04-17 14:56:07 +0000 UTC" firstStartedPulling="2026-04-17 14:56:54.801587734 +0000 UTC m=+2964.139378822" lastFinishedPulling="2026-04-17 14:56:56.594062096 +0000 UTC m=+2965.931853189" observedRunningTime="2026-04-17 14:56:57.197914136 +0000 UTC m=+2966.535705243" watchObservedRunningTime="2026-04-17 14:56:57.200116851 +0000 UTC m=+2966.537908012"
Apr 17 14:56:59.225706 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:59.225680 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-cgd26_a39d75e4-6ec9-4f74-b702-49461b73e668/global-pull-secret-syncer/0.log"
Apr 17 14:56:59.309064 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:59.309030 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-5bbkp_629add9d-ca73-450f-84ca-bbde403bb4a1/konnectivity-agent/0.log"
Apr 17 14:56:59.386792 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:56:59.386756 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-140-104.ec2.internal_d77c3cf6e24439d7792eeaf3b4807554/haproxy/0.log"
Apr 17 14:57:02.712639 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:02.712606 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-75587bd455-b6tjh_d3d37f50-e860-4ca0-9480-8785e349ad48/cluster-monitoring-operator/0.log"
Apr 17 14:57:02.799247 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:02.799214 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-74c596c9bf-ckdfj_1ef97ae2-261e-4072-8667-f11c1fbf0024/metrics-server/0.log"
Apr 17 14:57:02.822194 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:02.822162 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-7dccd58f55-qplhm_7e9b4543-ab44-4004-8de1-369205a75e30/monitoring-plugin/0.log"
Apr 17 14:57:02.915352 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:02.915316 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-m6s4q_51af9648-c2cc-494b-bd12-803fa91c0c24/node-exporter/0.log"
Apr 17 14:57:02.935021 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:02.934985 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-m6s4q_51af9648-c2cc-494b-bd12-803fa91c0c24/kube-rbac-proxy/0.log"
Apr 17 14:57:02.957764 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:02.957739 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-m6s4q_51af9648-c2cc-494b-bd12-803fa91c0c24/init-textfile/0.log"
Apr 17 14:57:03.065277 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.065237 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-4zmlh_efeaf65b-a5ac-4bbd-bc74-77d92f56b365/kube-rbac-proxy-main/0.log"
Apr 17 14:57:03.087874 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.087834 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-4zmlh_efeaf65b-a5ac-4bbd-bc74-77d92f56b365/kube-rbac-proxy-self/0.log"
Apr 17 14:57:03.107223 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.107193 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-4zmlh_efeaf65b-a5ac-4bbd-bc74-77d92f56b365/openshift-state-metrics/0.log"
Apr 17 14:57:03.138216 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.138169 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_445cda6f-16ef-42d7-9ac0-5f969136057a/prometheus/0.log"
Apr 17 14:57:03.158499 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.158473 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_445cda6f-16ef-42d7-9ac0-5f969136057a/config-reloader/0.log"
Apr 17 14:57:03.177044 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.176969 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_445cda6f-16ef-42d7-9ac0-5f969136057a/thanos-sidecar/0.log"
Apr 17 14:57:03.198105 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.198075 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_445cda6f-16ef-42d7-9ac0-5f969136057a/kube-rbac-proxy-web/0.log"
Apr 17 14:57:03.218152 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.218120 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_445cda6f-16ef-42d7-9ac0-5f969136057a/kube-rbac-proxy/0.log"
Apr 17 14:57:03.236993 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.236961 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_445cda6f-16ef-42d7-9ac0-5f969136057a/kube-rbac-proxy-thanos/0.log"
Apr 17 14:57:03.287895 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.287848 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_445cda6f-16ef-42d7-9ac0-5f969136057a/init-config-reloader/0.log"
Apr 17 14:57:03.309440 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.309401 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-45424_56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890/prometheus-operator/0.log"
Apr 17 14:57:03.327534 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.327508 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-45424_56bb2c90-d8f5-4f9a-9e91-ff9a83a6f890/kube-rbac-proxy/0.log"
Apr 17 14:57:03.441983 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.441901 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-9475c9469-l2zbd_6ec54637-4eaa-4c78-9ade-06fa724dd4b9/thanos-query/0.log"
Apr 17 14:57:03.462309 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.462282 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-9475c9469-l2zbd_6ec54637-4eaa-4c78-9ade-06fa724dd4b9/kube-rbac-proxy-web/0.log"
Apr 17 14:57:03.482101 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.482067 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-9475c9469-l2zbd_6ec54637-4eaa-4c78-9ade-06fa724dd4b9/kube-rbac-proxy/0.log"
Apr 17 14:57:03.503040 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.503006 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-9475c9469-l2zbd_6ec54637-4eaa-4c78-9ade-06fa724dd4b9/prom-label-proxy/0.log"
Apr 17 14:57:03.528034 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.527979 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-9475c9469-l2zbd_6ec54637-4eaa-4c78-9ade-06fa724dd4b9/kube-rbac-proxy-rules/0.log"
Apr 17 14:57:03.547070 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:03.547042 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-9475c9469-l2zbd_6ec54637-4eaa-4c78-9ade-06fa724dd4b9/kube-rbac-proxy-metrics/0.log"
Apr 17 14:57:05.120026 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:05.119991 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/1.log"
Apr 17 14:57:05.126450 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:05.126422 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-s6z2b_12933e12-58b2-4d0c-993a-ab690936a989/console-operator/2.log"
Apr 17 14:57:05.897915 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:05.897890 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7c6cbb6c87-9rmr5_9d9069f6-f802-49ac-ae51-9f6f9fac00d2/volume-data-source-validator/0.log"
Apr 17 14:57:06.658004 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:06.657931 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-mrqpz_c99bcc58-e14e-4455-8308-f2a36ad35eff/dns/0.log"
Apr 17 14:57:06.675786 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:06.675756 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-mrqpz_c99bcc58-e14e-4455-8308-f2a36ad35eff/kube-rbac-proxy/0.log"
Apr 17 14:57:06.695501 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:06.695474 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-sh5zz_0ca24926-e680-41ea-85df-e8d6ba856597/dns-node-resolver/0.log"
Apr 17 14:57:07.161415 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.161390 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-6nflw_dd5f44ea-0850-4ff4-8f21-1f4135fc02ca/node-ca/0.log"
Apr 17 14:57:07.253422 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.253385 2568 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"]
Apr 17 14:57:07.289037 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.288954 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"]
Apr 17 14:57:07.289222 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.289088 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.425448 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.425361 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-sys\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.425448 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.425410 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxmqr\" (UniqueName: \"kubernetes.io/projected/18339e10-dcfe-458a-9e94-6050f1d1f29a-kube-api-access-zxmqr\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.425667 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.425495 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-lib-modules\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.425667 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.425517 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-podres\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.425667 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.425557 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-proc\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.526801 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.526759 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-proc\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.527223 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.527201 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-sys\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.527465 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.527447 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zxmqr\" (UniqueName: \"kubernetes.io/projected/18339e10-dcfe-458a-9e94-6050f1d1f29a-kube-api-access-zxmqr\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.528014 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.527990 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-lib-modules\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.528311 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.528293 2568 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-podres\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.528553 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.527389 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-sys\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.528669 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.528253 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-lib-modules\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.528760 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.527073 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-proc\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.528879 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.528484 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/18339e10-dcfe-458a-9e94-6050f1d1f29a-podres\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.535382 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.535351 2568 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxmqr\" (UniqueName: \"kubernetes.io/projected/18339e10-dcfe-458a-9e94-6050f1d1f29a-kube-api-access-zxmqr\") pod \"perf-node-gather-daemonset-b4dc6\" (UID: \"18339e10-dcfe-458a-9e94-6050f1d1f29a\") " pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.601786 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.601751 2568 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:07.745793 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:07.745735 2568 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"]
Apr 17 14:57:07.748534 ip-10-0-140-104 kubenswrapper[2568]: W0417 14:57:07.748505 2568 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod18339e10_dcfe_458a_9e94_6050f1d1f29a.slice/crio-aeff73247cc1529531640f1629e67f80cec95c6b22632ccf238ee791397e39cf WatchSource:0}: Error finding container aeff73247cc1529531640f1629e67f80cec95c6b22632ccf238ee791397e39cf: Status 404 returned error can't find the container with id aeff73247cc1529531640f1629e67f80cec95c6b22632ccf238ee791397e39cf
Apr 17 14:57:08.191203 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:08.191173 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-n4248_f7bc9c86-d3a7-43d7-9862-6170cb691894/serve-healthcheck-canary/0.log"
Apr 17 14:57:08.206726 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:08.206690 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6" event={"ID":"18339e10-dcfe-458a-9e94-6050f1d1f29a","Type":"ContainerStarted","Data":"fc130370cfd7ff703742b8e01241f2176b8c5ad2b40b8ee84ea92feb9d45bcfc"}
Apr 17 14:57:08.206726 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:08.206732 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6" event={"ID":"18339e10-dcfe-458a-9e94-6050f1d1f29a","Type":"ContainerStarted","Data":"aeff73247cc1529531640f1629e67f80cec95c6b22632ccf238ee791397e39cf"}
Apr 17 14:57:08.206948 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:08.206760 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:08.222739 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:08.222692 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6" podStartSLOduration=1.222677883 podStartE2EDuration="1.222677883s" podCreationTimestamp="2026-04-17 14:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 14:57:08.220068195 +0000 UTC m=+2977.557859307" watchObservedRunningTime="2026-04-17 14:57:08.222677883 +0000 UTC m=+2977.560468993"
Apr 17 14:57:08.688896 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:08.688853 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-qnjxh_69839ef7-4c5c-4b64-95e0-5187708bda48/kube-rbac-proxy/0.log"
Apr 17 14:57:08.706529 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:08.706497 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-qnjxh_69839ef7-4c5c-4b64-95e0-5187708bda48/exporter/0.log"
Apr 17 14:57:08.726969 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:08.726940 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-qnjxh_69839ef7-4c5c-4b64-95e0-5187708bda48/extractor/0.log"
Apr 17 14:57:13.131910 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:13.131875 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-nctkf_1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772/migrator/0.log"
Apr 17 14:57:13.149491 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:13.149467 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-nctkf_1a5d1cc2-214d-4ed2-a8be-4b70dbf4c772/graceful-termination/0.log"
Apr 17 14:57:14.220191 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.220166 2568 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-wh6bh/perf-node-gather-daemonset-b4dc6"
Apr 17 14:57:14.492730 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.492663 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x4sv2_7075f961-4efd-41c8-9591-c7608ce4563a/kube-multus-additional-cni-plugins/0.log"
Apr 17 14:57:14.520138 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.520106 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x4sv2_7075f961-4efd-41c8-9591-c7608ce4563a/egress-router-binary-copy/0.log"
Apr 17 14:57:14.535659 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.535628 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x4sv2_7075f961-4efd-41c8-9591-c7608ce4563a/cni-plugins/0.log"
Apr 17 14:57:14.554487 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.554438 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x4sv2_7075f961-4efd-41c8-9591-c7608ce4563a/bond-cni-plugin/0.log"
Apr 17 14:57:14.580504 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.580476 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x4sv2_7075f961-4efd-41c8-9591-c7608ce4563a/routeoverride-cni/0.log"
Apr 17 14:57:14.596705 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.596673 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x4sv2_7075f961-4efd-41c8-9591-c7608ce4563a/whereabouts-cni-bincopy/0.log"
Apr 17 14:57:14.623493 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.623463 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x4sv2_7075f961-4efd-41c8-9591-c7608ce4563a/whereabouts-cni/0.log"
Apr 17 14:57:14.670237 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.670211 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hhjfc_98c5f7fc-8ede-450b-961f-6812d4ee961b/kube-multus/0.log"
Apr 17 14:57:14.795662 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.795579 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-vgr2h_30cbf063-628a-472d-981e-312f5bea1f7f/network-metrics-daemon/0.log"
Apr 17 14:57:14.812273 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:14.812248 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-vgr2h_30cbf063-628a-472d-981e-312f5bea1f7f/kube-rbac-proxy/0.log"
Apr 17 14:57:16.155226 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:16.155198 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmbqf_47f04f56-df85-4d1d-ade1-4e5bc3b49e67/ovn-controller/0.log"
Apr 17 14:57:16.191369 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:16.191338 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmbqf_47f04f56-df85-4d1d-ade1-4e5bc3b49e67/ovn-acl-logging/0.log"
Apr 17 14:57:16.212599 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:16.212569 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmbqf_47f04f56-df85-4d1d-ade1-4e5bc3b49e67/kube-rbac-proxy-node/0.log"
Apr 17 14:57:16.233035 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:16.232999 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmbqf_47f04f56-df85-4d1d-ade1-4e5bc3b49e67/kube-rbac-proxy-ovn-metrics/0.log"
Apr 17 14:57:16.252926 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:16.252891 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmbqf_47f04f56-df85-4d1d-ade1-4e5bc3b49e67/northd/0.log"
Apr 17 14:57:16.272465 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:16.272436 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmbqf_47f04f56-df85-4d1d-ade1-4e5bc3b49e67/nbdb/0.log"
Apr 17 14:57:16.293795 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:16.293761 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmbqf_47f04f56-df85-4d1d-ade1-4e5bc3b49e67/sbdb/0.log"
Apr 17 14:57:16.470050 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:16.470019 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmbqf_47f04f56-df85-4d1d-ade1-4e5bc3b49e67/ovnkube-controller/0.log"
Apr 17 14:57:17.618231 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:17.618194 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-gh5vx_6946cb58-b181-4207-94e3-02bca6a030b2/network-check-target-container/0.log"
Apr 17 14:57:18.245526 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:18.245427 2568 generic.go:358] "Generic (PLEG): container finished" podID="6c2329fe-ec3d-4b3f-9491-de3e4740b94a" containerID="ba233acab81b1a257b30b3a929a2caaca110e7ae29dfc4241513c84fc76e3e8c" exitCode=0
Apr 17 14:57:18.245526 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:18.245478 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" event={"ID":"6c2329fe-ec3d-4b3f-9491-de3e4740b94a","Type":"ContainerDied","Data":"ba233acab81b1a257b30b3a929a2caaca110e7ae29dfc4241513c84fc76e3e8c"}
Apr 17 14:57:18.352115 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:18.351994 2568 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps"
Apr 17 14:57:18.530557 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:18.530526 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7w4\" (UniqueName: \"kubernetes.io/projected/6c2329fe-ec3d-4b3f-9491-de3e4740b94a-kube-api-access-4c7w4\") pod \"6c2329fe-ec3d-4b3f-9491-de3e4740b94a\" (UID: \"6c2329fe-ec3d-4b3f-9491-de3e4740b94a\") "
Apr 17 14:57:18.533125 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:18.533089 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2329fe-ec3d-4b3f-9491-de3e4740b94a-kube-api-access-4c7w4" (OuterVolumeSpecName: "kube-api-access-4c7w4") pod "6c2329fe-ec3d-4b3f-9491-de3e4740b94a" (UID: "6c2329fe-ec3d-4b3f-9491-de3e4740b94a"). InnerVolumeSpecName "kube-api-access-4c7w4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 14:57:18.546398 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:18.546374 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-5qssl_51daea5e-57b3-4362-8315-ad830e53345a/iptables-alerter/0.log"
Apr 17 14:57:18.631687 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:18.631606 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4c7w4\" (UniqueName: \"kubernetes.io/projected/6c2329fe-ec3d-4b3f-9491-de3e4740b94a-kube-api-access-4c7w4\") on node \"ip-10-0-140-104.ec2.internal\" DevicePath \"\""
Apr 17 14:57:19.181768 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:19.181721 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-vzgdn_a7543f64-7649-44f7-bd0e-fdc6724b7f1e/tuned/0.log"
Apr 17 14:57:19.250031 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:19.249994 2568 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps" event={"ID":"6c2329fe-ec3d-4b3f-9491-de3e4740b94a","Type":"ContainerDied","Data":"d65f0f2b4c1843b73669d92258244fdbf7cd3d5582418060c5540fa21b8a0c78"}
Apr 17 14:57:19.250216 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:19.250052 2568 scope.go:117] "RemoveContainer" containerID="ba233acab81b1a257b30b3a929a2caaca110e7ae29dfc4241513c84fc76e3e8c"
Apr 17 14:57:19.250341 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:19.250320 2568 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps"
Apr 17 14:57:19.279300 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:19.279270 2568 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps"]
Apr 17 14:57:19.283306 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:19.283271 2568 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["test-ns-lgg4b/test-trainjob-hpdnx-node-0-0-4xxps"]
Apr 17 14:57:20.971184 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:20.971153 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6dc5bdb6b4-6qfvr_6a93e218-4c76-4f41-ac1c-71595ca764d4/cluster-samples-operator/0.log"
Apr 17 14:57:20.985583 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:20.985551 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6dc5bdb6b4-6qfvr_6a93e218-4c76-4f41-ac1c-71595ca764d4/cluster-samples-operator-watch/0.log"
Apr 17 14:57:21.264561 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:21.264490 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2329fe-ec3d-4b3f-9491-de3e4740b94a" path="/var/lib/kubelet/pods/6c2329fe-ec3d-4b3f-9491-de3e4740b94a/volumes"
Apr 17 14:57:21.826188 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:21.826161 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-d6fc45fc5-qvzpd_fa9a1ab0-741d-44b8-9de1-9a0b296aee9c/service-ca-operator/1.log"
Apr 17 14:57:21.853005 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:21.852971 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-d6fc45fc5-qvzpd_fa9a1ab0-741d-44b8-9de1-9a0b296aee9c/service-ca-operator/0.log"
Apr 17 14:57:22.597149 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:22.597120 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-wn4kk_380cb33f-199e-44fd-8e74-06e5aad709a9/csi-driver/0.log"
Apr 17 14:57:22.615123 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:22.615086 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-wn4kk_380cb33f-199e-44fd-8e74-06e5aad709a9/csi-node-driver-registrar/0.log"
Apr 17 14:57:22.632629 ip-10-0-140-104 kubenswrapper[2568]: I0417 14:57:22.632601 2568 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-wn4kk_380cb33f-199e-44fd-8e74-06e5aad709a9/csi-liveness-probe/0.log"