Apr 17 07:48:54.212992 ip-10-0-132-178 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 17 07:48:54.213006 ip-10-0-132-178 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 17 07:48:54.213016 ip-10-0-132-178 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 17 07:48:54.213337 ip-10-0-132-178 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 17 07:49:04.409679 ip-10-0-132-178 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 17 07:49:04.409697 ip-10-0-132-178 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot 99c49adb870f4ae68ecd76f34d405b87 --
Apr 17 07:51:17.115915 ip-10-0-132-178 systemd[1]: Starting Kubernetes Kubelet...
Apr 17 07:51:17.519963 ip-10-0-132-178 kubenswrapper[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 07:51:17.519963 ip-10-0-132-178 kubenswrapper[2569]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 17 07:51:17.519963 ip-10-0-132-178 kubenswrapper[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 07:51:17.519963 ip-10-0-132-178 kubenswrapper[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 07:51:17.519963 ip-10-0-132-178 kubenswrapper[2569]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
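The deprecation warnings above all point at the same fix: move those flags into the kubelet config file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump further down). As a rough, hedged illustration of the config-file equivalents of three of the flags, the sketch below builds a KubeletConfiguration using the k8s.io/kubelet/config/v1beta1 types and sigs.k8s.io/yaml; the field values are copied from the FLAG dump in this log, but the program itself is an assumption, not vendor tooling.

// Illustrative only: render the config-file equivalents of the
// deprecated kubelet flags warned about above. Assumes the
// k8s.io/kubelet/config/v1beta1 API and sigs.k8s.io/yaml are available;
// values come from the FLAG dump later in this log.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// replaces --container-runtime-endpoint
		ContainerRuntimeEndpoint: "/var/run/crio/crio.sock",
		// replaces --volume-plugin-dir
		VolumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec",
		// replaces --system-reserved
		SystemReserved: map[string]string{
			"cpu": "500m", "ephemeral-storage": "1Gi", "memory": "1Gi",
		},
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // paste into the file given by --config
}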
Apr 17 07:51:17.521180 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.520735 2569 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 07:51:17.523678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523663 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 07:51:17.523678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523678 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 07:51:17.523678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523681 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523685 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523687 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523690 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523692 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523695 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523698 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523700 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523703 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523705 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523709 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523711 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523714 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523722 2569 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523725 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523727 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523731 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523734 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523736 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523739 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 07:51:17.523772 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523741 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523744 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523747 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523749 2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523752 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523754 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523756 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523759 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523762 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523764 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523767 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523769 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523772 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523775 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523778 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523781 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523784 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523788 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523793 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523796 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 07:51:17.524236 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523798 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523801 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523804 2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523807 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523810 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523812 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523815 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523818 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523820 2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523823 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523826 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523828 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523832 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
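The feature_gate.go:328 flood in this boot is the kubelet's feature-gate registry rejecting gate names it does not know (OpenShift operator-level gates propagated down to the node), while known-but-deprecated or GA gates are logged at lines 349/351 instead of being rejected. Below is a minimal, self-contained sketch of that classification; the known-gates table is illustrative, not the kubelet's real registry.

// Sketch of the classification behind the warnings above: unknown
// gates warn as "unrecognized", deprecated and GA gates warn that they
// will be removed, and the rest are set silently. Illustrative only.
package main

import "log"

type stability int

const (
	alpha stability = iota
	deprecated
	ga
)

// Illustrative subset; the kubelet's real table lives in
// k8s.io/component-base/featuregate registrations.
var known = map[string]stability{
	"KMSv1":                          deprecated,
	"ServiceAccountTokenNodeBinding": ga,
	"NodeSwap":                       alpha,
}

func set(gates map[string]bool) {
	for name, val := range gates {
		st, ok := known[name]
		switch {
		case !ok:
			log.Printf("unrecognized feature gate: %s", name)
		case st == deprecated:
			log.Printf("Setting deprecated feature gate %s=%t. It will be removed in a future release.", name, val)
		case st == ga:
			log.Printf("Setting GA feature gate %s=%t. It will be removed in a future release.", name, val)
		}
	}
}

func main() {
	// Mirrors three outcomes visible in the log above.
	set(map[string]bool{
		"KMSv1":                          true,
		"ServiceAccountTokenNodeBinding": true,
		"GatewayAPI":                     true,
	})
}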
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523836 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523840 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523843 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523846 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523849 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523852 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 07:51:17.524780 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523856 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523859 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523863 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523866 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523869 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523875 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523878 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523881 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523884 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523887 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523889 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523892 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523895 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523897 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523900 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523903 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523905 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523908 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523910 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523912 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 07:51:17.525239 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523915 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523917 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523920 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523922 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.523925 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524299 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524304 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524307 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524310 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524312 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524315 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524318 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524320 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524323 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524326 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524328 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524331 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524334 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524337 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524340 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 07:51:17.525741 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524342 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524345 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524347 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524350 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524352 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524354 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524357 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524360 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524362 2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524364 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524367 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524369 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524372 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524374 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524377 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524379 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524382 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524384 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524387 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 07:51:17.526222 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524391 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
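Note that the same gate list is parsed and logged several times in this boot (compare the repeated runs above and below), so each unrecognized gate appears more than once. When triaging, a small filter that deduplicates the names is handy. The helper below is illustrative, not part of any shipped tooling; the file name count_gates.go is assumed. Run it as: journalctl -u kubelet | go run count_gates.go

// Illustrative triage helper: reads journal text on stdin and counts
// how often each unrecognized feature gate is reported, so repeated
// parse passes collapse into one line per gate.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // tolerate very long journal lines
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%4d %s\n", counts[n], n)
	}
}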
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524395 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524398 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524400 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524403 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524405 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524408 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524411 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524413 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524416 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524419 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524423 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524426 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524428 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524431 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524434 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524436 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524439 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524441 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 07:51:17.526708 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524444 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524446 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524449 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524451 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524454 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524456 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524459 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524461 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524464 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524466 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524468 2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524471 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524474 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524477 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524480 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524483 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524485 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524488 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524491 2569 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524494 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 07:51:17.527177 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524496 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524499 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524501 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524504 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524506 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524510 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524513 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524515 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524518 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524520 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524522 2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524525 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.524528 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524601 2569 flags.go:64] FLAG: --address="0.0.0.0"
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524608 2569 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524614 2569 flags.go:64] FLAG: --anonymous-auth="true"
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524619 2569 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524623 2569 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524626 2569 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524630 2569 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524634 2569 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 17 07:51:17.527678 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524638 2569 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524657 2569 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524661 2569 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524665 2569 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524673 2569 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524676 2569 flags.go:64] FLAG: --cgroup-root=""
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524679 2569 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524682 2569 flags.go:64] FLAG: --client-ca-file=""
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524685 2569 flags.go:64] FLAG: --cloud-config=""
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524688 2569 flags.go:64] FLAG: --cloud-provider="external"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524691 2569 flags.go:64] FLAG: --cluster-dns="[]"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524696 2569 flags.go:64] FLAG: --cluster-domain=""
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524699 2569 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524702 2569 flags.go:64] FLAG: --config-dir=""
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524705 2569 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524708 2569 flags.go:64] FLAG: --container-log-max-files="5"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524712 2569 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524715 2569 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524718 2569 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524722 2569 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524724 2569 flags.go:64] FLAG: --contention-profiling="false"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524727 2569 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524730 2569 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524734 2569 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524736 2569 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 17 07:51:17.528184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524741 2569 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524744 2569 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524747 2569 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524749 2569 flags.go:64] FLAG: --enable-load-reader="false"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524753 2569 flags.go:64] FLAG: --enable-server="true"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524756 2569 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524760 2569 flags.go:64] FLAG: --event-burst="100"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524764 2569 flags.go:64] FLAG: --event-qps="50"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524766 2569 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524769 2569 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524772 2569 flags.go:64] FLAG: --eviction-hard=""
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524778 2569 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524781 2569 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524785 2569 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524788 2569 flags.go:64] FLAG: --eviction-soft=""
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524791 2569 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524794 2569 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524798 2569 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524801 2569 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524804 2569 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524807 2569 flags.go:64] FLAG: --fail-swap-on="true"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524809 2569 flags.go:64] FLAG: --feature-gates=""
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524813 2569 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524816 2569 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524819 2569 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 17 07:51:17.528808 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524823 2569 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524826 2569 flags.go:64] FLAG: --healthz-port="10248"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524829 2569 flags.go:64] FLAG: --help="false"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524832 2569 flags.go:64] FLAG: --hostname-override="ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524835 2569 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524838 2569 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524841 2569 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524844 2569 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524848 2569 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524850 2569 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524854 2569 flags.go:64] FLAG: --image-service-endpoint=""
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524856 2569 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524859 2569 flags.go:64] FLAG: --kube-api-burst="100"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524862 2569 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524865 2569 flags.go:64] FLAG: --kube-api-qps="50"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524868 2569 flags.go:64] FLAG: --kube-reserved=""
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524871 2569 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524874 2569 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524879 2569 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524882 2569 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524885 2569 flags.go:64] FLAG: --lock-file=""
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524888 2569 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524891 2569 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524894 2569 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 17 07:51:17.529454 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524899 2569 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524902 2569 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524905 2569 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524908 2569 flags.go:64] FLAG: --logging-format="text"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524911 2569 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524914 2569 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524917 2569 flags.go:64] FLAG: --manifest-url=""
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524920 2569 flags.go:64] FLAG: --manifest-url-header=""
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524924 2569 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524928 2569 flags.go:64] FLAG: --max-open-files="1000000"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524932 2569 flags.go:64] FLAG: --max-pods="110"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524935 2569 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524938 2569 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524941 2569 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524944 2569 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524947 2569 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524949 2569 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524952 2569 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524960 2569 flags.go:64] FLAG: --node-status-max-images="50"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524963 2569 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524965 2569 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524968 2569 flags.go:64] FLAG: --pod-cidr=""
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524971 2569 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715"
Apr 17 07:51:17.530047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524976 2569 flags.go:64] FLAG: --pod-manifest-path=""
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524979 2569 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524983 2569 flags.go:64] FLAG: --pods-per-core="0"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524986 2569 flags.go:64] FLAG: --port="10250"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524990 2569 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524994 2569 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-05892ae66aea98314"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.524997 2569 flags.go:64] FLAG: --qos-reserved=""
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525000 2569 flags.go:64] FLAG: --read-only-port="10255"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525003 2569 flags.go:64] FLAG: --register-node="true"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525006 2569 flags.go:64] FLAG: --register-schedulable="true"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525008 2569 flags.go:64] FLAG: --register-with-taints=""
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525012 2569 flags.go:64] FLAG: --registry-burst="10"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525015 2569 flags.go:64] FLAG: --registry-qps="5"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525018 2569 flags.go:64] FLAG: --reserved-cpus=""
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525021 2569 flags.go:64] FLAG: --reserved-memory=""
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525025 2569 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525028 2569 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525031 2569 flags.go:64] FLAG: --rotate-certificates="false"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525034 2569 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525037 2569 flags.go:64] FLAG: --runonce="false"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525040 2569 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525043 2569 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525046 2569 flags.go:64] FLAG: --seccomp-default="false"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525049 2569 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525051 2569 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525055 2569 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 17 07:51:17.530585 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525058 2569 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525061 2569 flags.go:64] FLAG: --storage-driver-password="root"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525063 2569 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525067 2569 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525070 2569 flags.go:64] FLAG: --storage-driver-user="root"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525073 2569 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525076 2569 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525079 2569 flags.go:64] FLAG: --system-cgroups=""
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525082 2569 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525089 2569 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525093 2569 flags.go:64] FLAG: --tls-cert-file=""
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525096 2569 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525100 2569 flags.go:64] FLAG: --tls-min-version=""
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525103 2569 flags.go:64] FLAG: --tls-private-key-file=""
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525106 2569 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525109 2569 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525112 2569 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525115 2569 flags.go:64] FLAG: --v="2"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525120 2569 flags.go:64] FLAG: --version="false"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525127 2569 flags.go:64] FLAG: --vmodule=""
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525131 2569 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.525134 2569 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525227 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525231 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 07:51:17.531276 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525234 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525236 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525239 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525244 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
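The "FLAG: --name=value" lines in the dump above come from the kubelet logging every registered flag with its effective value after parsing, which it does at verbosity --v=2 (flags.go:64). A standard-library-only sketch of that pattern follows; the kubelet itself uses pflag rather than the stdlib flag package, and the two flags here are just placeholders.

// Sketch of the flag-dump pattern seen above: after Parse, walk every
// registered flag and log its name with its effective value.
package main

import (
	"flag"
	"log"
)

func main() {
	addr := flag.String("address", "0.0.0.0", "bind address")
	maxPods := flag.Int("max-pods", 110, "maximum pods per node")
	flag.Parse()

	// VisitAll visits all registered flags, set or not, in
	// lexicographic order, mirroring the exhaustive dump in the log.
	flag.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
	})
	_, _ = addr, maxPods // values would be used by the real program
}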
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525247 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525251 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525254 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525256 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525259 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525262 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525265 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525267 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525270 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525272 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525275 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525277 2569 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525280 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525284 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525289 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 07:51:17.531965 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525291 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525294 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525297 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525299 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525302 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525304 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525307 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525309 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525312 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525314 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525317 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525319 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525322 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525325 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525327 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525330 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525333 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525336 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525338 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525341 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 07:51:17.532453 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525343 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525346 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525348 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525351 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525353 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525356 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525358 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525361 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525363 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525365 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525372 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525377 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525379 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525382 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525385 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525387 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525390 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525392 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525395 2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525398 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 07:51:17.532957 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525400 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525403 2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525405 2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525408 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525411 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525413 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525416 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525418 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525421 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525424 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525427 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525429 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525432 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525434 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525437 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525439 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525442 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525444 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525447 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525449 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 07:51:17.533485 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525452 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525455 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525460 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525464 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.525467 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.526002 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.532349 2569 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.532364 2569 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532412 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532418 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532421 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532425 2569 feature_gate.go:328]
unrecognized feature gate: ManagedBootImages Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532428 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532431 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532434 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 07:51:17.534000 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532437 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532439 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532442 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532445 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532447 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532450 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532453 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532456 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532458 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532461 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532464 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532467 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532470 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532472 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532475 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532477 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532480 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532482 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532485 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 07:51:17.534371 
ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532488 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 07:51:17.534371 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532490 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532493 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532495 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532498 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532503 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532507 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532510 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532512 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532515 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532518 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532521 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532523 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532526 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532529 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532531 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532534 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532537 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532539 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532542 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532544 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 07:51:17.534875 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532547 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532550 
2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532552 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532555 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532557 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532560 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532563 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532565 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532568 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532570 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532573 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532575 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532578 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532581 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532583 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532586 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532589 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532593 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532595 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532598 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 07:51:17.535373 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532601 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532604 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532606 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532609 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 
17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532611 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532614 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532616 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532619 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532621 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532624 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532626 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532629 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532631 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532634 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532636 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532640 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532662 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532665 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 07:51:17.535873 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532668 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.532673 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532775 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532781 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532784 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532787 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532790 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532793 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532796 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532800 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532803 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532807 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532810 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532813 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532816 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 07:51:17.536307 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532819 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532822 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532825 2569 
feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532828 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532831 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532834 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532836 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532839 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532841 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532844 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532847 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532849 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532852 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532854 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532857 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532860 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532862 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532865 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532868 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532870 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 07:51:17.536684 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532873 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532875 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532878 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532880 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532883 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 07:51:17.537184 
ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532885 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532888 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532890 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532893 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532895 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532899 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532901 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532903 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532906 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532909 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532911 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532915 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532918 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532921 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 07:51:17.537184 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532924 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532926 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532929 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532932 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532934 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532937 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532939 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532942 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532944 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532947 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532950 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532952 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532955 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532957 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532960 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532962 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532965 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532968 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532970 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532973 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 07:51:17.537638 ip-10-0-132-178 kubenswrapper[2569]: W0417 
07:51:17.532976 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532980 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532983 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532985 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532988 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532991 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532993 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532996 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.532998 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.533001 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.533003 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.533006 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.533008 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:17.533011 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.533015 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 17 07:51:17.538165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.533683 2569 server.go:962] "Client rotation is on, will bootstrap in background" Apr 17 07:51:17.538530 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.536770 2569 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 17 07:51:17.538530 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.537636 2569 server.go:1019] "Starting client certificate rotation" Apr 17 07:51:17.538530 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.537739 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 17 07:51:17.538530 ip-10-0-132-178 
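The flood of feature_gate.go:328 warnings above is essentially one block repeated three times: the kubelet appears to parse the same feature-gate map once per configuration pass, logs a warning for every key its own gate table does not register, and then prints the effective map at feature_gate.go:384. That effective map is far smaller than the input because names such as GatewayAPI or OpenShiftPodSecurityAdmission look like OpenShift cluster-level gates rather than upstream kubelet gates, so the kubelet warns and drops them. A minimal stdlib-only Go sketch for tallying the distinct unknown gates in a saved copy of this journal (the kubelet.log filename is an assumption):

```go
// Count distinct "unrecognized feature gate" names in a saved journal
// excerpt. A minimal sketch, assuming the excerpt was saved to kubelet.log
// (hypothetical filename); only the Go standard library is used.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Matches the payload emitted at feature_gate.go:328.
	re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}

	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%-55s %d\n", n, counts[n])
	}
}
```

On this excerpt each name should show a count of three, one per parse, which is a quick way to confirm that no pass saw a different gate list.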
Apr 17 07:51:17.538530 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.538483 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 17 07:51:17.563240 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.563217 2569 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 17 07:51:17.568566 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.568539 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 17 07:51:17.585378 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.585361 2569 log.go:25] "Validated CRI v1 runtime API"
Apr 17 07:51:17.590665 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.590639 2569 log.go:25] "Validated CRI v1 image API"
Apr 17 07:51:17.591856 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.591841 2569 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 17 07:51:17.593960 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.593941 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 17 07:51:17.596802 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.596753 2569 fs.go:135] Filesystem UUIDs: map[1301c98a-94d8-4b92-9350-eaaed3902022:/dev/nvme0n1p4 77fd247d-ab6e-4f41-8ddd-a88e38cd2d3d:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2]
Apr 17 07:51:17.596862 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.596802 2569 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 17 07:51:17.603120 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.603014 2569 manager.go:217] Machine: {Timestamp:2026-04-17 07:51:17.601246962 +0000 UTC m=+0.377212982 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3103794 MemoryCapacity:33164488704 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec214969ede74ee52dfcfca1b0952dc1 SystemUUID:ec214969-ede7-4ee5-2dfc-fca1b0952dc1 BootID:99c49adb-870f-4ae6-8ecd-76f34d405b87 Filesystems:[{Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582242304 Type:vfs Inodes:4048399 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:f3:b5:8f:4e:2b Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:f3:b5:8f:4e:2b Speed:0 Mtu:9001} {Name:ovs-system MacAddress:da:e6:95:e4:40:a6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164488704 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 17 07:51:17.603120 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.603116 2569 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 17 07:51:17.603224 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.603197 2569 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 17 07:51:17.604212 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.604187 2569 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 07:51:17.604358 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.604213 2569 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-132-178.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 07:51:17.604406 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.604367 2569 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 07:51:17.604406 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.604376 2569 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 07:51:17.604406 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.604393 2569 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 17 07:51:17.605092 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.605082 2569 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 17 07:51:17.606247 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.606237 2569 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 07:51:17.606456 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.606443 2569 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 17 07:51:17.610321 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.610305 2569 kubelet.go:491] "Attempting to sync node with API server"
Apr 17 07:51:17.610400 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.610324 2569 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 07:51:17.610400 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.610337 2569 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 17 07:51:17.610400 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.610347 2569 kubelet.go:397] "Adding apiserver pod source"
Apr 17 07:51:17.610400 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.610356 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 07:51:17.611531 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.611518 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 17 07:51:17.611589 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.611537 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 17 07:51:17.614148 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.614120 2569 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1"
Apr 17 07:51:17.615834 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.615820 2569 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 07:51:17.617146 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617134 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617151 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617157 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617163 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617169 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617174 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617180 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617186 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617192 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Apr 17 07:51:17.617197 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617198 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Apr 17 07:51:17.617425 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617207 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Apr 17 07:51:17.617425 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.617216 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Apr 17 07:51:17.618792 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.618782 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Apr 17 07:51:17.618792 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.618792 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Apr 17 07:51:17.622814 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.622799 2569 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 17 07:51:17.622898 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.622836 2569 server.go:1295] "Started kubelet"
Apr 17 07:51:17.622953 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.622916 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 07:51:17.623005 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.622995 2569 server_v1.go:47] "podresources" method="list" useActivePods=true
Apr 17 07:51:17.623044 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.622928 2569 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 07:51:17.623726 ip-10-0-132-178 systemd[1]: Started Kubernetes Kubelet.
Apr 17 07:51:17.624582 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.624524 2569 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 07:51:17.625767 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.625740 2569 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 07:51:17.625880 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.625854 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 07:51:17.625993 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.625968 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-132-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 17 07:51:17.626041 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.625993 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-132-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 07:51:17.630385 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.630369 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 07:51:17.630553 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.630392 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 17 07:51:17.631028 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631004 2569 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 17 07:51:17.631116 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631033 2569 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 17 07:51:17.631167 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631130 2569 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 17 07:51:17.631219 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631177 2569 reconstruct.go:97] "Volume reconstruction finished"
Apr 17 07:51:17.631219 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631184 2569 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 07:51:17.631430 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.631411 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:17.631621 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631603 2569 factory.go:153] Registering CRI-O factory
Apr 17 07:51:17.631716 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631629 2569 factory.go:223] Registration of the crio container factory successfully
Apr 17 07:51:17.631716 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631711 2569 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 17 07:51:17.631811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631722 2569 factory.go:55] Registering systemd factory
Apr 17 07:51:17.631811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631730 2569 factory.go:223] Registration of the systemd container factory successfully
Apr 17 07:51:17.631811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631757 2569 factory.go:103] Registering Raw factory
Apr 17 07:51:17.631811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.631772 2569 manager.go:1196] Started watching for new ooms in manager
Apr 17 07:51:17.632483 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.632463 2569 manager.go:319] Starting recovery of all containers
Apr 17 07:51:17.637099 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.636721 2569 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 17 07:51:17.637099 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.636928 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 07:51:17.637413 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.637350 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-132-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Apr 17 07:51:17.638794 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.638768 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-x4sw4"
Apr 17 07:51:17.640258 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.636850 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-132-178.ec2.internal.18a7158dfe59aad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-132-178.ec2.internal,UID:ip-10-0-132-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-132-178.ec2.internal,},FirstTimestamp:2026-04-17 07:51:17.622811353 +0000 UTC m=+0.398777367,LastTimestamp:2026-04-17 07:51:17.622811353 +0000 UTC m=+0.398777367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-132-178.ec2.internal,}"
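Every failure in this stretch, the reflector "Failed to watch" errors, the CSINode probe, the lease, and the rejected events, carries the same cause: User "system:anonymous" cannot act on the resource. That is the expected gap between "Use the bootstrap credentials to request a cert" and the issuance of csr-x4sw4 just above; until the signed client certificate is in place, requests fall back to anonymous and receive 403s. A small stdlib-only Go sketch that pulls the denied verb/resource pairs out of a saved copy of the journal (kubelet.log is again an assumed filename), to confirm the failures are all anonymous-window denials:

```go
// Extract the RBAC denials from the excerpt to confirm they are the
// pre-bootstrap "system:anonymous" 403s. A minimal sketch over a saved
// copy of the journal (kubelet.log is a hypothetical filename).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// The quotes may appear escaped (\" inside err="...") or bare, so the
	// backslash is optional in the pattern.
	re := regexp.MustCompile(`User \\?"system:anonymous\\?" cannot (\w+) resource \\?"([^"\\]+)\\?"`)
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			fmt.Printf("denied: %s %s\n", m[1], m[2]) // e.g. denied: list services
		}
	}
}
```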
protocol="IPv6" Apr 17 07:51:17.643660 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.643530 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-x4sw4" Apr 17 07:51:17.648217 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.648197 2569 manager.go:324] Recovery completed Apr 17 07:51:17.652217 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.652205 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 07:51:17.655546 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.655529 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientMemory" Apr 17 07:51:17.655606 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.655561 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 07:51:17.655606 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.655575 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientPID" Apr 17 07:51:17.656132 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.656116 2569 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 17 07:51:17.656132 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.656127 2569 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 17 07:51:17.656239 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.656142 2569 state_mem.go:36] "Initialized new in-memory state store" Apr 17 07:51:17.657419 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.657356 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-132-178.ec2.internal.18a7158e004d24dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-132-178.ec2.internal,UID:ip-10-0-132-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-132-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-132-178.ec2.internal,},FirstTimestamp:2026-04-17 07:51:17.655545052 +0000 UTC m=+0.431511072,LastTimestamp:2026-04-17 07:51:17.655545052 +0000 UTC m=+0.431511072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-132-178.ec2.internal,}" Apr 17 07:51:17.658162 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.658149 2569 policy_none.go:49] "None policy: Start" Apr 17 07:51:17.658227 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.658167 2569 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 07:51:17.658227 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.658180 2569 state_mem.go:35] "Initializing new in-memory state store" Apr 17 07:51:17.707338 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.707320 2569 manager.go:341] "Starting Device Plugin manager" Apr 17 07:51:17.707481 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.707373 2569 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 07:51:17.707481 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.707387 2569 server.go:85] "Starting 
device plugin registration server" Apr 17 07:51:17.707704 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.707682 2569 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 07:51:17.707804 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.707696 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 07:51:17.707804 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.707784 2569 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 17 07:51:17.707909 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.707857 2569 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 17 07:51:17.707909 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.707866 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 07:51:17.708503 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.708480 2569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 17 07:51:17.708575 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.708528 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-132-178.ec2.internal\" not found" Apr 17 07:51:17.749000 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.748974 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 07:51:17.749136 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.749008 2569 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 07:51:17.749136 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.749027 2569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 07:51:17.749136 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.749033 2569 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 17 07:51:17.749136 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.749068 2569 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 17 07:51:17.752022 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.751999 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 07:51:17.809554 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.809479 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 07:51:17.810502 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.810484 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 07:51:17.810619 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.810513 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 07:51:17.810619 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.810528 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientPID"
Apr 17 07:51:17.810619 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.810555 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.819154 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.819132 2569 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.819265 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.819157 2569 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-132-178.ec2.internal\": node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:17.839181 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.839158 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:17.849942 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.849922 2569 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal"]
Apr 17 07:51:17.850044 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.849996 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 07:51:17.851011 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.850995 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 07:51:17.851102 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.851026 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 07:51:17.851102 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.851041 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientPID"
Apr 17 07:51:17.852223 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.852208 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 07:51:17.852366 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.852352 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.852419 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.852378 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 07:51:17.853557 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.853537 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 07:51:17.853659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.853568 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 07:51:17.853659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.853578 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientPID"
Apr 17 07:51:17.853659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.853541 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 07:51:17.853659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.853658 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 07:51:17.853820 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.853675 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientPID"
Apr 17 07:51:17.855243 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.855229 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.855313 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.855253 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 17 07:51:17.856076 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.856053 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 17 07:51:17.856076 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.856078 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 17 07:51:17.856196 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.856089 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeHasSufficientPID"
Apr 17 07:51:17.880250 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.880230 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-132-178.ec2.internal\" not found" node="ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.883602 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.883587 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-132-178.ec2.internal\" not found" node="ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.934073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.932877 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/151265600758e7af6e4648d3faae164a-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal\" (UID: \"151265600758e7af6e4648d3faae164a\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.934073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.932927 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/3dd336e8c07dea5cd1d3b829faa22207-config\") pod \"kube-apiserver-proxy-ip-10-0-132-178.ec2.internal\" (UID: \"3dd336e8c07dea5cd1d3b829faa22207\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.934073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:17.932962 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/151265600758e7af6e4648d3faae164a-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal\" (UID: \"151265600758e7af6e4648d3faae164a\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:17.939282 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:17.939264 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.033797 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.033764 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/3dd336e8c07dea5cd1d3b829faa22207-config\") pod \"kube-apiserver-proxy-ip-10-0-132-178.ec2.internal\" (UID: \"3dd336e8c07dea5cd1d3b829faa22207\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:18.033918 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.033803 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/151265600758e7af6e4648d3faae164a-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal\" (UID: \"151265600758e7af6e4648d3faae164a\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:18.033918 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.033827 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/151265600758e7af6e4648d3faae164a-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal\" (UID: \"151265600758e7af6e4648d3faae164a\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:18.033918 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.033872 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/3dd336e8c07dea5cd1d3b829faa22207-config\") pod \"kube-apiserver-proxy-ip-10-0-132-178.ec2.internal\" (UID: \"3dd336e8c07dea5cd1d3b829faa22207\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:18.033918 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.033905 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/151265600758e7af6e4648d3faae164a-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal\" (UID: \"151265600758e7af6e4648d3faae164a\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:18.034057 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.033945 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/151265600758e7af6e4648d3faae164a-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal\" (UID: \"151265600758e7af6e4648d3faae164a\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:18.039824 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.039802 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.140675 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.140566 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.181750 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.181728 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:18.185107 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.185082 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:18.240790 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.240758 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.341347 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.341315 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.441863 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.441780 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.537335 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.537309 2569 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 17 07:51:18.537954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.537458 2569 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 17 07:51:18.542457 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.542434 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.568422 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.568394 2569 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 07:51:18.631186 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.631153 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 17 07:51:18.641227 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.641205 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 17 07:51:18.641989 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:18.641965 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd336e8c07dea5cd1d3b829faa22207.slice/crio-f9f78b14615469978ec3db02f2b53090492b1ad408c10812250f62fcf6e09882 WatchSource:0}: Error finding container f9f78b14615469978ec3db02f2b53090492b1ad408c10812250f62fcf6e09882: Status 404 returned error can't find the container with id f9f78b14615469978ec3db02f2b53090492b1ad408c10812250f62fcf6e09882
Apr 17 07:51:18.642354 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:18.642339 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151265600758e7af6e4648d3faae164a.slice/crio-cd278d7243f8899d87bf5c63d7d0e2b88cae6872b1b1292beca4d7ce23016c2a WatchSource:0}: Error finding container cd278d7243f8899d87bf5c63d7d0e2b88cae6872b1b1292beca4d7ce23016c2a: Status 404 returned error can't find the container with id cd278d7243f8899d87bf5c63d7d0e2b88cae6872b1b1292beca4d7ce23016c2a
Apr 17 07:51:18.642984 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.642965 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.645290 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.645266 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-16 07:46:17 +0000 UTC" deadline="2028-01-04 01:03:35.888504748 +0000 UTC"
Apr 17 07:51:18.645290 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.645288 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="15041h12m17.243220182s"
Apr 17 07:51:18.646924 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.646903 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 17 07:51:18.743950 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.743849 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.751920 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.751873 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal" event={"ID":"3dd336e8c07dea5cd1d3b829faa22207","Type":"ContainerStarted","Data":"f9f78b14615469978ec3db02f2b53090492b1ad408c10812250f62fcf6e09882"}
Apr 17 07:51:18.752728 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.752706 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal" event={"ID":"151265600758e7af6e4648d3faae164a","Type":"ContainerStarted","Data":"cd278d7243f8899d87bf5c63d7d0e2b88cae6872b1b1292beca4d7ce23016c2a"}
Apr 17 07:51:18.755145 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.755131 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-cdv5m"
Apr 17 07:51:18.761914 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:18.761897 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-cdv5m"
Apr 17 07:51:18.844634 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.844599 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:18.945104 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:18.945070 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:19.045631 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.045597 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-178.ec2.internal\" not found"
Apr 17 07:51:19.052270 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.052247 2569 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 07:51:19.131712 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.131438 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:19.143743 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.143370 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 17 07:51:19.144485 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.144448 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal"
Apr 17 07:51:19.152154 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.152132 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 17 07:51:19.225717 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.225692 2569 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 07:51:19.527843 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.527814 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 07:51:19.612459 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.612425 2569 apiserver.go:52] "Watching apiserver"
Apr 17 07:51:19.620009 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.619981 2569 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 17 07:51:19.620393 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.620372 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-ktjch","openshift-multus/multus-additional-cni-plugins-rjghl","openshift-multus/network-metrics-daemon-pqns9","openshift-network-diagnostics/network-check-target-t7s67","openshift-network-operator/iptables-alerter-k6m72","openshift-ovn-kubernetes/ovnkube-node-wbxwr","kube-system/konnectivity-agent-txzjm","kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw","openshift-dns/node-resolver-xg76x","openshift-image-registry/node-ca-jmbz2","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal","openshift-multus/multus-jm5zl"]
Apr 17 07:51:19.622209 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.622185 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xg76x"
Apr 17 07:51:19.623758 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.623730 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.624759 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.624739 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 17 07:51:19.624844 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.624739 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 17 07:51:19.624896 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.624750 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-qw9hw\""
Apr 17 07:51:19.624947 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.624891 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:19.625493 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.624989 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226"
Apr 17 07:51:19.626444 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.626424 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Apr 17 07:51:19.626723 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.626703 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Apr 17 07:51:19.626809 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.626749 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Apr 17 07:51:19.626954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.626939 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Apr 17 07:51:19.627068 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.627052 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-zpq7q\""
Apr 17 07:51:19.627389 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.627371 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Apr 17 07:51:19.628590 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.628132 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.630089 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.630071 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-txzjm"
Apr 17 07:51:19.630474 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.630450 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Apr 17 07:51:19.630556 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.630542 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-nzr8b\""
Apr 17 07:51:19.631187 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.631169 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67"
Apr 17 07:51:19.631283 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.631226 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11"
Apr 17 07:51:19.631345 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.631320 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-k6m72"
Apr 17 07:51:19.632192 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.632172 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 17 07:51:19.632725 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.632703 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.632829 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.632814 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-9t7p7\""
Apr 17 07:51:19.633166 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.633121 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 17 07:51:19.633549 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.633406 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Apr 17 07:51:19.633885 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.633865 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Apr 17 07:51:19.633885 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.633877 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Apr 17 07:51:19.634030 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.633912 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-w9vhh\""
Apr 17 07:51:19.635190 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.635174 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw"
Apr 17 07:51:19.636135 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636118 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Apr 17 07:51:19.636228 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636163 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Apr 17 07:51:19.636228 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636182 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Apr 17 07:51:19.636366 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636124 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Apr 17 07:51:19.636420 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636382 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-8k57d\""
Apr 17 07:51:19.636420 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636393 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Apr 17 07:51:19.636514 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636438 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Apr 17 07:51:19.636736 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636719 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-ktjch"
Apr 17 07:51:19.636814 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.636743 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jmbz2"
Apr 17 07:51:19.637403 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.637383 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 17 07:51:19.637403 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.637398 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 17 07:51:19.637525 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.637409 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-wxbs8\""
Apr 17 07:51:19.637736 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.637711 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Apr 17 07:51:19.638810 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.638791 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Apr 17 07:51:19.639198 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.639031 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Apr 17 07:51:19.639198 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.639078 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-rf8rr\""
Apr 17 07:51:19.639198 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.639084 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Apr 17 07:51:19.639198 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.639165 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-crfbz\""
Apr 17 07:51:19.639495 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.639292 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Apr 17 07:51:19.639495 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.639319 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Apr 17 07:51:19.643127 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643107 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-os-release\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.643217 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643181 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.643285 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643266 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-conf-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.643340 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643298 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67"
Apr 17 07:51:19.643340 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643325 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f75afd8-16c7-448e-8f36-42d2b2219a87-host-slash\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72"
Apr 17 07:51:19.643446 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643360 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-ovn\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.643446 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643386 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-socket-dir-parent\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.643446 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643410 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:19.643577 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643453 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/edf1e79e-27d7-4945-a525-2276d0340c21-agent-certs\") pod \"konnectivity-agent-txzjm\" (UID: \"edf1e79e-27d7-4945-a525-2276d0340c21\") " pod="kube-system/konnectivity-agent-txzjm"
Apr 17 07:51:19.643577 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643486 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9719cdb1-840b-4a4d-8e68-be9ea50fc183-tmp-dir\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x"
Apr 17 07:51:19.643577 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643514 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c78xj\" (UniqueName: \"kubernetes.io/projected/20cbf601-967c-4931-9e41-f9b3377a7284-kube-api-access-c78xj\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.643577 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643553 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-cni-bin\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.643800 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643598 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n27s\" (UniqueName: \"kubernetes.io/projected/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-kube-api-access-8n27s\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:19.643800 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643628 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-run-ovn-kubernetes\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.643800 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643674 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.643800 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643705 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-os-release\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.643800 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643727 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-slash\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.643800 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643753 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2f75afd8-16c7-448e-8f36-42d2b2219a87-iptables-alerter-script\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72"
Apr 17 07:51:19.643800 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643786 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-etc-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643814 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-env-overrides\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643838 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9719cdb1-840b-4a4d-8e68-be9ea50fc183-hosts-file\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643870 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-cni-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643908 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-node-log\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643954 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6246\" (UniqueName: \"kubernetes.io/projected/9719cdb1-840b-4a4d-8e68-be9ea50fc183-kube-api-access-x6246\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.643991 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-netns\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644030 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-etc-kubernetes\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644062 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-var-lib-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644098 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644087 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovnkube-script-lib\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644111 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-system-cni-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644145 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-multus-certs\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644168 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-systemd-units\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644191 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovnkube-config\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644213 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-cni-multus\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644237 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-kubelet\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644289 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovn-node-metrics-cert\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644322 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-system-cni-dir\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644347 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644371 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-systemd\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644388 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-cnibin\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644409 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw5wv\" (UniqueName: \"kubernetes.io/projected/2f75afd8-16c7-448e-8f36-42d2b2219a87-kube-api-access-pw5wv\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644422 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-run-netns\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.644488 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644455 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4331ee4b-17e0-4bfd-a306-b47ead03f055-cni-binary-copy\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644490 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-k8s-cni-cncf-io\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644532 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644574 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-kubelet\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644592 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/edf1e79e-27d7-4945-a525-2276d0340c21-konnectivity-ca\") pod \"konnectivity-agent-txzjm\" (UID: \"edf1e79e-27d7-4945-a525-2276d0340c21\") " pod="kube-system/konnectivity-agent-txzjm"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644615 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-cnibin\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644638 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-daemon-config\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644683 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644722 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644753 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6p4g\" (UniqueName: \"kubernetes.io/projected/4331ee4b-17e0-4bfd-a306-b47ead03f055-kube-api-access-h6p4g\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644784 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-cni-bin\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644808 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-cni-netd\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644856 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-hostroot\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644887 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-log-socket\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.645200 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.644914 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2rv\" (UniqueName: \"kubernetes.io/projected/2e0e406d-3d55-41f3-ba63-448c73f82ded-kube-api-access-5m2rv\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.732422 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.732388 2569 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 17 07:51:19.745290 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745263 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-log-socket\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.745290 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745292 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2rv\" (UniqueName: \"kubernetes.io/projected/2e0e406d-3d55-41f3-ba63-448c73f82ded-kube-api-access-5m2rv\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745309 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-os-release\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745327 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745344 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-conf-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745369 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-device-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745369 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-log-socket\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745392 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-etc-selinux\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745420 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-sys-fs\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745450 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-conf-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745464 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67"
Apr 17 07:51:19.745471 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745458 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-os-release\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745489 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f75afd8-16c7-448e-8f36-42d2b2219a87-host-slash\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745516 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-ovn\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745538 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f75afd8-16c7-448e-8f36-42d2b2219a87-host-slash\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745592 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-socket-dir-parent\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745617 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-ovn\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745634 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-socket-dir-parent\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745637 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysctl-conf\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745696 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745730 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/edf1e79e-27d7-4945-a525-2276d0340c21-agent-certs\") pod \"konnectivity-agent-txzjm\" (UID: \"edf1e79e-27d7-4945-a525-2276d0340c21\") " pod="kube-system/konnectivity-agent-txzjm"
Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745760 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName:
\"kubernetes.io/empty-dir/9719cdb1-840b-4a4d-8e68-be9ea50fc183-tmp-dir\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x" Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745792 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c78xj\" (UniqueName: \"kubernetes.io/projected/20cbf601-967c-4931-9e41-f9b3377a7284-kube-api-access-c78xj\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745824 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-cni-bin\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.745832 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745851 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-tmp\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745876 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5681141e-f336-418e-bb90-9a38ef69d0fc-host\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745937 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-cni-bin\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.745954 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.745940 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:20.24588998 +0000 UTC m=+3.021856004 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745965 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8n27s\" (UniqueName: \"kubernetes.io/projected/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-kube-api-access-8n27s\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745980 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.745989 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-run-ovn-kubernetes\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746047 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-run-ovn-kubernetes\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746055 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746068 2569 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746086 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-os-release\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746115 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5681141e-f336-418e-bb90-9a38ef69d0fc-serviceca\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746106 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746143 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-slash\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746181 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2f75afd8-16c7-448e-8f36-42d2b2219a87-iptables-alerter-script\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746199 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-os-release\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746223 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-etc-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746211 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9719cdb1-840b-4a4d-8e68-be9ea50fc183-tmp-dir\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746229 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-slash\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746264 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-env-overrides\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.746811 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746257 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-etc-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746322 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9719cdb1-840b-4a4d-8e68-be9ea50fc183-hosts-file\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746357 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-cni-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746383 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-sys\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746410 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-node-log\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746420 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9719cdb1-840b-4a4d-8e68-be9ea50fc183-hosts-file\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746425 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-cni-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746434 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x6246\" (UniqueName: 
\"kubernetes.io/projected/9719cdb1-840b-4a4d-8e68-be9ea50fc183-kube-api-access-x6246\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746469 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-node-log\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746492 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-netns\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746513 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-etc-kubernetes\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746541 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-kubelet-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746568 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-var-lib-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746571 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-etc-kubernetes\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746542 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-netns\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746592 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovnkube-script-lib\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746625 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-var-lib-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746654 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-system-cni-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.747579 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746680 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-multus-certs\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746700 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2f75afd8-16c7-448e-8f36-42d2b2219a87-iptables-alerter-script\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746705 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-tuned\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746754 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-system-cni-dir\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746760 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-systemd-units\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746766 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-multus-certs\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746788 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovnkube-config\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746812 2569 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-cni-multus\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746814 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-systemd-units\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746837 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-registration-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746862 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsrqx\" (UniqueName: \"kubernetes.io/projected/41bd8583-55d0-44d9-b7c7-0dee1be59867-kube-api-access-bsrqx\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746881 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-cni-multus\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746887 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-lib-modules\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746820 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-env-overrides\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746930 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-kubelet\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746958 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovn-node-metrics-cert\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.746981 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-system-cni-dir\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.748364 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747010 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747021 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-kubelet\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747036 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysconfig\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747061 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-host\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747062 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-system-cni-dir\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747088 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-systemd\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747111 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-cnibin\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747134 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovnkube-script-lib\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747176 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysctl-d\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747201 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-run\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747204 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-systemd\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747228 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntv5c\" (UniqueName: \"kubernetes.io/projected/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-kube-api-access-ntv5c\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747255 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pw5wv\" (UniqueName: \"kubernetes.io/projected/2f75afd8-16c7-448e-8f36-42d2b2219a87-kube-api-access-pw5wv\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747276 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovnkube-config\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747282 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-run-netns\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747259 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-cnibin\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747318 2569 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4331ee4b-17e0-4bfd-a306-b47ead03f055-cni-binary-copy\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.748838 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747328 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-run-netns\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747342 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-k8s-cni-cncf-io\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747376 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747400 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-kubelet\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747425 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-kubernetes\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747459 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/edf1e79e-27d7-4945-a525-2276d0340c21-konnectivity-ca\") pod \"konnectivity-agent-txzjm\" (UID: \"edf1e79e-27d7-4945-a525-2276d0340c21\") " pod="kube-system/konnectivity-agent-txzjm" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747484 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-cnibin\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747509 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-daemon-config\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.749408 
ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747521 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747534 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-systemd\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747574 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-run-k8s-cni-cncf-io\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747633 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-var-lib-kubelet\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747680 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747708 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747759 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h6p4g\" (UniqueName: \"kubernetes.io/projected/4331ee4b-17e0-4bfd-a306-b47ead03f055-kube-api-access-h6p4g\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747784 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-cni-bin\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747809 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-cni-netd\") pod 
\"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.749408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747833 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-hostroot\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747860 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-socket-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747895 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-modprobe-d\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747923 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4331ee4b-17e0-4bfd-a306-b47ead03f055-cni-binary-copy\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747947 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh4nt\" (UniqueName: \"kubernetes.io/projected/5681141e-f336-418e-bb90-9a38ef69d0fc-kube-api-access-mh4nt\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.747969 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-cnibin\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748018 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-cni-bin\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748048 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-host-cni-netd\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748070 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4331ee4b-17e0-4bfd-a306-b47ead03f055-multus-daemon-config\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748080 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-hostroot\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748085 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20cbf601-967c-4931-9e41-f9b3377a7284-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748137 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4331ee4b-17e0-4bfd-a306-b47ead03f055-host-var-lib-kubelet\") pod \"multus-jm5zl\" (UID: \"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748197 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e0e406d-3d55-41f3-ba63-448c73f82ded-run-openvswitch\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748261 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20cbf601-967c-4931-9e41-f9b3377a7284-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.748501 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/edf1e79e-27d7-4945-a525-2276d0340c21-konnectivity-ca\") pod \"konnectivity-agent-txzjm\" (UID: \"edf1e79e-27d7-4945-a525-2276d0340c21\") " pod="kube-system/konnectivity-agent-txzjm" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.749625 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e0e406d-3d55-41f3-ba63-448c73f82ded-ovn-node-metrics-cert\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.750205 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.749696 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/edf1e79e-27d7-4945-a525-2276d0340c21-agent-certs\") pod \"konnectivity-agent-txzjm\" (UID: \"edf1e79e-27d7-4945-a525-2276d0340c21\") " pod="kube-system/konnectivity-agent-txzjm" Apr 17 07:51:19.754760 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.754740 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 07:51:19.754868 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.754767 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 07:51:19.754868 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.754780 2569 projected.go:194] Error preparing data for projected volume kube-api-access-6q7vj for pod openshift-network-diagnostics/network-check-target-t7s67: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:19.754868 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:19.754842 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj podName:f102eb44-3020-49b4-b898-dcf83e0d0a11 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:20.254825884 +0000 UTC m=+3.030791895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6q7vj" (UniqueName: "kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj") pod "network-check-target-t7s67" (UID: "f102eb44-3020-49b4-b898-dcf83e0d0a11") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:19.756857 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.756833 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n27s\" (UniqueName: \"kubernetes.io/projected/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-kube-api-access-8n27s\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:19.756957 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.756879 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2rv\" (UniqueName: \"kubernetes.io/projected/2e0e406d-3d55-41f3-ba63-448c73f82ded-kube-api-access-5m2rv\") pod \"ovnkube-node-wbxwr\" (UID: \"2e0e406d-3d55-41f3-ba63-448c73f82ded\") " pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.757062 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.757046 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c78xj\" (UniqueName: \"kubernetes.io/projected/20cbf601-967c-4931-9e41-f9b3377a7284-kube-api-access-c78xj\") pod \"multus-additional-cni-plugins-rjghl\" (UID: \"20cbf601-967c-4931-9e41-f9b3377a7284\") " pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.761955 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.761930 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw5wv\" (UniqueName: \"kubernetes.io/projected/2f75afd8-16c7-448e-8f36-42d2b2219a87-kube-api-access-pw5wv\") pod \"iptables-alerter-k6m72\" (UID: \"2f75afd8-16c7-448e-8f36-42d2b2219a87\") " pod="openshift-network-operator/iptables-alerter-k6m72" Apr 17 07:51:19.762241 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.762217 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6p4g\" (UniqueName: \"kubernetes.io/projected/4331ee4b-17e0-4bfd-a306-b47ead03f055-kube-api-access-h6p4g\") pod \"multus-jm5zl\" (UID: 
\"4331ee4b-17e0-4bfd-a306-b47ead03f055\") " pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.762351 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.762334 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6246\" (UniqueName: \"kubernetes.io/projected/9719cdb1-840b-4a4d-8e68-be9ea50fc183-kube-api-access-x6246\") pod \"node-resolver-xg76x\" (UID: \"9719cdb1-840b-4a4d-8e68-be9ea50fc183\") " pod="openshift-dns/node-resolver-xg76x" Apr 17 07:51:19.763171 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.763147 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-16 07:46:18 +0000 UTC" deadline="2027-12-11 09:58:39.869685037 +0000 UTC" Apr 17 07:51:19.763171 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.763170 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14474h7m20.106516628s" Apr 17 07:51:19.848794 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.848713 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-tuned\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.848794 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.848754 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-registration-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.848794 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.848776 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bsrqx\" (UniqueName: \"kubernetes.io/projected/41bd8583-55d0-44d9-b7c7-0dee1be59867-kube-api-access-bsrqx\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849054 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.848905 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-registration-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849054 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.848943 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-lib-modules\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849054 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.848981 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysconfig\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849054 ip-10-0-132-178 kubenswrapper[2569]: 
I0417 07:51:19.849036 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysconfig\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849054 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849037 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-host\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849076 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysctl-d\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849103 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-run\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849119 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-lib-modules\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849129 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ntv5c\" (UniqueName: \"kubernetes.io/projected/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-kube-api-access-ntv5c\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849145 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-host\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849184 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-kubernetes\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849191 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-run\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849206 2569 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysctl-d\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849215 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-systemd\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849236 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-var-lib-kubelet\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849246 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-kubernetes\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849257 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-socket-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849261 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-systemd\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849277 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-modprobe-d\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849296 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-var-lib-kubelet\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849299 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mh4nt\" (UniqueName: \"kubernetes.io/projected/5681141e-f336-418e-bb90-9a38ef69d0fc-kube-api-access-mh4nt\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.849900 
ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849376 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-socket-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849382 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-device-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849411 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-etc-selinux\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849426 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-modprobe-d\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849430 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-sys-fs\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849465 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-sys-fs\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849488 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-device-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849491 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysctl-conf\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849544 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-tmp\") pod \"tuned-ktjch\" (UID: 
\"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849555 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-etc-selinux\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849575 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5681141e-f336-418e-bb90-9a38ef69d0fc-host\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849585 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-sysctl-conf\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849604 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5681141e-f336-418e-bb90-9a38ef69d0fc-serviceca\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849620 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5681141e-f336-418e-bb90-9a38ef69d0fc-host\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.849900 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849634 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-sys\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.850659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849685 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-kubelet-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.850659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849697 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-sys\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.850659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849764 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bd8583-55d0-44d9-b7c7-0dee1be59867-kubelet-dir\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: 
\"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.850659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.849982 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5681141e-f336-418e-bb90-9a38ef69d0fc-serviceca\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.851424 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.851397 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-etc-tuned\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.853304 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.853285 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-tmp\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.856946 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.856907 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntv5c\" (UniqueName: \"kubernetes.io/projected/8d6a3eec-3fe4-46c9-9cf9-999e26bceb92-kube-api-access-ntv5c\") pod \"tuned-ktjch\" (UID: \"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92\") " pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.857199 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.857183 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh4nt\" (UniqueName: \"kubernetes.io/projected/5681141e-f336-418e-bb90-9a38ef69d0fc-kube-api-access-mh4nt\") pod \"node-ca-jmbz2\" (UID: \"5681141e-f336-418e-bb90-9a38ef69d0fc\") " pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:19.857257 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.857213 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsrqx\" (UniqueName: \"kubernetes.io/projected/41bd8583-55d0-44d9-b7c7-0dee1be59867-kube-api-access-bsrqx\") pod \"aws-ebs-csi-driver-node-t8rsw\" (UID: \"41bd8583-55d0-44d9-b7c7-0dee1be59867\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.936396 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.936366 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xg76x" Apr 17 07:51:19.944030 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.944007 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjghl" Apr 17 07:51:19.951711 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.951688 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jm5zl" Apr 17 07:51:19.959126 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.959105 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-txzjm" Apr 17 07:51:19.965622 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.965599 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-k6m72" Apr 17 07:51:19.975283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.975261 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:19.980958 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.980939 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" Apr 17 07:51:19.987531 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.987511 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-ktjch" Apr 17 07:51:19.991639 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:19.991620 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jmbz2" Apr 17 07:51:20.252388 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.252311 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:20.252552 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:20.252459 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:20.252607 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:20.252555 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:21.252524637 +0000 UTC m=+4.028490651 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:20.302868 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.302668 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9719cdb1_840b_4a4d_8e68_be9ea50fc183.slice/crio-0f3d9f340bcc7262950125c10d4d0f5098b75dcbec02703d85b602b9e3f95b11 WatchSource:0}: Error finding container 0f3d9f340bcc7262950125c10d4d0f5098b75dcbec02703d85b602b9e3f95b11: Status 404 returned error can't find the container with id 0f3d9f340bcc7262950125c10d4d0f5098b75dcbec02703d85b602b9e3f95b11 Apr 17 07:51:20.303998 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.303973 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41bd8583_55d0_44d9_b7c7_0dee1be59867.slice/crio-4ad42928c1265e64ddb200b8e85a1178cda916c973053b483c2efa371b738baa WatchSource:0}: Error finding container 4ad42928c1265e64ddb200b8e85a1178cda916c973053b483c2efa371b738baa: Status 404 returned error can't find the container with id 4ad42928c1265e64ddb200b8e85a1178cda916c973053b483c2efa371b738baa Apr 17 07:51:20.308483 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.308444 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f75afd8_16c7_448e_8f36_42d2b2219a87.slice/crio-fd23fb85ba34a4903dc31b38fb7c42cf88c08a67e7dfb20028809f08c227acb3 WatchSource:0}: Error finding container fd23fb85ba34a4903dc31b38fb7c42cf88c08a67e7dfb20028809f08c227acb3: Status 404 returned error can't find the container with id fd23fb85ba34a4903dc31b38fb7c42cf88c08a67e7dfb20028809f08c227acb3 Apr 17 07:51:20.310145 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.310123 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d6a3eec_3fe4_46c9_9cf9_999e26bceb92.slice/crio-b4bd40c2d539baf67cf40fe22c85d3ae92b03f7ce347a67c4f0c858658db1a48 WatchSource:0}: Error finding container b4bd40c2d539baf67cf40fe22c85d3ae92b03f7ce347a67c4f0c858658db1a48: Status 404 returned error can't find the container with id b4bd40c2d539baf67cf40fe22c85d3ae92b03f7ce347a67c4f0c858658db1a48 Apr 17 07:51:20.311136 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.310990 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4331ee4b_17e0_4bfd_a306_b47ead03f055.slice/crio-386d7118434533a8c137ea9856e845350fcd8363a1cdb6e30bdf390880f495c3 WatchSource:0}: Error finding container 386d7118434533a8c137ea9856e845350fcd8363a1cdb6e30bdf390880f495c3: Status 404 returned error can't find the container with id 386d7118434533a8c137ea9856e845350fcd8363a1cdb6e30bdf390880f495c3 Apr 17 07:51:20.312458 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.312427 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5681141e_f336_418e_bb90_9a38ef69d0fc.slice/crio-c6cd5e0e3c08b4dfdf058c03abbeb0bd1cb6d5ff0ef10953c359f43cc5883e98 WatchSource:0}: Error finding container c6cd5e0e3c08b4dfdf058c03abbeb0bd1cb6d5ff0ef10953c359f43cc5883e98: Status 404 returned error can't 
find the container with id c6cd5e0e3c08b4dfdf058c03abbeb0bd1cb6d5ff0ef10953c359f43cc5883e98 Apr 17 07:51:20.313483 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.313454 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedf1e79e_27d7_4945_a525_2276d0340c21.slice/crio-1975ead548484e12e6f10932a1b24933a4374c980b6c12e7fed99509f3c5d382 WatchSource:0}: Error finding container 1975ead548484e12e6f10932a1b24933a4374c980b6c12e7fed99509f3c5d382: Status 404 returned error can't find the container with id 1975ead548484e12e6f10932a1b24933a4374c980b6c12e7fed99509f3c5d382 Apr 17 07:51:20.314718 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.314565 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20cbf601_967c_4931_9e41_f9b3377a7284.slice/crio-3d2ae3b57c318088dca2b9974598e2e87b26b1d83fd6cced4a5a0469dd8b1c9e WatchSource:0}: Error finding container 3d2ae3b57c318088dca2b9974598e2e87b26b1d83fd6cced4a5a0469dd8b1c9e: Status 404 returned error can't find the container with id 3d2ae3b57c318088dca2b9974598e2e87b26b1d83fd6cced4a5a0469dd8b1c9e Apr 17 07:51:20.315489 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:20.315467 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e0e406d_3d55_41f3_ba63_448c73f82ded.slice/crio-49f2fe04b8c5119dcca03dce02b80ddb709a7788e054942698d7ac6dc05b8899 WatchSource:0}: Error finding container 49f2fe04b8c5119dcca03dce02b80ddb709a7788e054942698d7ac6dc05b8899: Status 404 returned error can't find the container with id 49f2fe04b8c5119dcca03dce02b80ddb709a7788e054942698d7ac6dc05b8899 Apr 17 07:51:20.353377 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.353354 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:20.353548 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:20.353529 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 07:51:20.353598 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:20.353553 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 07:51:20.353598 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:20.353563 2569 projected.go:194] Error preparing data for projected volume kube-api-access-6q7vj for pod openshift-network-diagnostics/network-check-target-t7s67: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:20.353675 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:20.353614 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj podName:f102eb44-3020-49b4-b898-dcf83e0d0a11 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:21.353598844 +0000 UTC m=+4.129564844 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6q7vj" (UniqueName: "kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj") pod "network-check-target-t7s67" (UID: "f102eb44-3020-49b4-b898-dcf83e0d0a11") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:20.760108 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.760070 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xg76x" event={"ID":"9719cdb1-840b-4a4d-8e68-be9ea50fc183","Type":"ContainerStarted","Data":"0f3d9f340bcc7262950125c10d4d0f5098b75dcbec02703d85b602b9e3f95b11"} Apr 17 07:51:20.764765 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.764061 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal" event={"ID":"3dd336e8c07dea5cd1d3b829faa22207","Type":"ContainerStarted","Data":"f1b7880d03eefad8204bdaef625a1ab514532246fc98bbb6170bcb911a53bb48"} Apr 17 07:51:20.764765 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.764717 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-16 07:46:18 +0000 UTC" deadline="2027-12-06 19:46:48.007195234 +0000 UTC" Apr 17 07:51:20.764765 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.764740 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14363h55m27.24245901s" Apr 17 07:51:20.771014 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.770773 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"49f2fe04b8c5119dcca03dce02b80ddb709a7788e054942698d7ac6dc05b8899"} Apr 17 07:51:20.772895 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.772868 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-txzjm" event={"ID":"edf1e79e-27d7-4945-a525-2276d0340c21","Type":"ContainerStarted","Data":"1975ead548484e12e6f10932a1b24933a4374c980b6c12e7fed99509f3c5d382"} Apr 17 07:51:20.774460 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.774399 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerStarted","Data":"3d2ae3b57c318088dca2b9974598e2e87b26b1d83fd6cced4a5a0469dd8b1c9e"} Apr 17 07:51:20.777482 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.777436 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jm5zl" event={"ID":"4331ee4b-17e0-4bfd-a306-b47ead03f055","Type":"ContainerStarted","Data":"386d7118434533a8c137ea9856e845350fcd8363a1cdb6e30bdf390880f495c3"} Apr 17 07:51:20.783224 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.783202 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-k6m72" event={"ID":"2f75afd8-16c7-448e-8f36-42d2b2219a87","Type":"ContainerStarted","Data":"fd23fb85ba34a4903dc31b38fb7c42cf88c08a67e7dfb20028809f08c227acb3"} Apr 17 07:51:20.786638 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.786613 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" 
event={"ID":"41bd8583-55d0-44d9-b7c7-0dee1be59867","Type":"ContainerStarted","Data":"4ad42928c1265e64ddb200b8e85a1178cda916c973053b483c2efa371b738baa"} Apr 17 07:51:20.795419 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.795391 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jmbz2" event={"ID":"5681141e-f336-418e-bb90-9a38ef69d0fc","Type":"ContainerStarted","Data":"c6cd5e0e3c08b4dfdf058c03abbeb0bd1cb6d5ff0ef10953c359f43cc5883e98"} Apr 17 07:51:20.798095 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:20.798072 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-ktjch" event={"ID":"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92","Type":"ContainerStarted","Data":"b4bd40c2d539baf67cf40fe22c85d3ae92b03f7ce347a67c4f0c858658db1a48"} Apr 17 07:51:21.260973 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:21.260280 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:21.260973 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:21.260460 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:21.260973 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:21.260522 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:23.260504038 +0000 UTC m=+6.036470053 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:21.367711 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:21.361730 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:21.368666 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:21.368088 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 07:51:21.368666 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:21.368117 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 07:51:21.368666 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:21.368139 2569 projected.go:194] Error preparing data for projected volume kube-api-access-6q7vj for pod openshift-network-diagnostics/network-check-target-t7s67: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:21.368666 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:21.368225 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj podName:f102eb44-3020-49b4-b898-dcf83e0d0a11 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:23.368183427 +0000 UTC m=+6.144149433 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6q7vj" (UniqueName: "kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj") pod "network-check-target-t7s67" (UID: "f102eb44-3020-49b4-b898-dcf83e0d0a11") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:21.750114 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:21.749808 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:21.750114 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:21.749939 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:21.750114 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:21.749948 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:21.750114 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:21.750033 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:21.809434 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:21.808285 2569 generic.go:358] "Generic (PLEG): container finished" podID="151265600758e7af6e4648d3faae164a" containerID="731e4ebd4afdd8f67ae67dce0fc9376f0c0ef8537fb70fed4ee113c7b3f45eff" exitCode=0 Apr 17 07:51:21.809434 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:21.809206 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal" event={"ID":"151265600758e7af6e4648d3faae164a","Type":"ContainerDied","Data":"731e4ebd4afdd8f67ae67dce0fc9376f0c0ef8537fb70fed4ee113c7b3f45eff"} Apr 17 07:51:21.824427 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:21.824115 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-178.ec2.internal" podStartSLOduration=2.824098441 podStartE2EDuration="2.824098441s" podCreationTimestamp="2026-04-17 07:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 07:51:20.779739755 +0000 UTC m=+3.555705781" watchObservedRunningTime="2026-04-17 07:51:21.824098441 +0000 UTC m=+4.600064467" Apr 17 07:51:22.817077 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:22.817038 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal" event={"ID":"151265600758e7af6e4648d3faae164a","Type":"ContainerStarted","Data":"879e334eaf2849404c8c72df3406d38347dc7f5646aef69e669cbbe878bcdaf8"} Apr 17 07:51:22.833224 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:22.833152 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-178.ec2.internal" podStartSLOduration=3.833135264 podStartE2EDuration="3.833135264s" podCreationTimestamp="2026-04-17 07:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 07:51:22.832079686 +0000 UTC m=+5.608045712" watchObservedRunningTime="2026-04-17 07:51:22.833135264 +0000 UTC m=+5.609101291" Apr 17 07:51:23.277657 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:23.277603 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:23.277877 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:23.277857 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:23.277949 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:23.277931 2569 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:27.277911353 +0000 UTC m=+10.053877415 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:23.379044 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:23.378995 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:23.379198 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:23.379169 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 07:51:23.379198 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:23.379191 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 07:51:23.379371 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:23.379204 2569 projected.go:194] Error preparing data for projected volume kube-api-access-6q7vj for pod openshift-network-diagnostics/network-check-target-t7s67: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:23.379371 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:23.379268 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj podName:f102eb44-3020-49b4-b898-dcf83e0d0a11 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:27.379249208 +0000 UTC m=+10.155215212 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6q7vj" (UniqueName: "kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj") pod "network-check-target-t7s67" (UID: "f102eb44-3020-49b4-b898-dcf83e0d0a11") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:23.750707 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:23.750294 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:23.750707 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:23.750429 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:23.750707 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:23.750444 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:23.750707 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:23.750570 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:25.756184 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:25.756154 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:25.756605 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:25.756189 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:25.756605 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:25.756269 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:25.756605 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:25.756373 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:27.315304 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:27.315262 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:27.315806 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:27.315419 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:27.315806 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:27.315509 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:35.315487387 +0000 UTC m=+18.091453407 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 07:51:27.416334 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:27.416300 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:27.416518 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:27.416455 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 07:51:27.416518 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:27.416474 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 07:51:27.416518 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:27.416486 2569 projected.go:194] Error preparing data for projected volume kube-api-access-6q7vj for pod openshift-network-diagnostics/network-check-target-t7s67: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:27.416688 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:27.416539 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj podName:f102eb44-3020-49b4-b898-dcf83e0d0a11 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:35.416520911 +0000 UTC m=+18.192486912 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6q7vj" (UniqueName: "kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj") pod "network-check-target-t7s67" (UID: "f102eb44-3020-49b4-b898-dcf83e0d0a11") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 07:51:27.753171 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:27.752774 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:27.753171 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:27.752901 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:27.753171 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:27.752958 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:27.753171 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:27.753071 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:28.241121 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.241086 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-6xq6f"] Apr 17 07:51:28.247562 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.247110 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.247562 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:28.247208 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b" Apr 17 07:51:28.321481 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.321443 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/69371276-3a35-470f-aaf5-f3677601470b-dbus\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.321481 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.321484 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.321991 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.321529 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/69371276-3a35-470f-aaf5-f3677601470b-kubelet-config\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.423448 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.422626 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.423448 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.422850 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/69371276-3a35-470f-aaf5-f3677601470b-kubelet-config\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " 
pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.423448 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.422914 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/69371276-3a35-470f-aaf5-f3677601470b-dbus\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.423448 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.423078 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/69371276-3a35-470f-aaf5-f3677601470b-dbus\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.423448 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:28.422757 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 07:51:28.423448 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:28.423157 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret podName:69371276-3a35-470f-aaf5-f3677601470b nodeName:}" failed. No retries permitted until 2026-04-17 07:51:28.923137311 +0000 UTC m=+11.699103314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret") pod "global-pull-secret-syncer-6xq6f" (UID: "69371276-3a35-470f-aaf5-f3677601470b") : object "kube-system"/"original-pull-secret" not registered Apr 17 07:51:28.423448 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.423411 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/69371276-3a35-470f-aaf5-f3677601470b-kubelet-config\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.927264 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:28.926754 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:28.927264 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:28.926884 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 07:51:28.927264 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:28.926945 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret podName:69371276-3a35-470f-aaf5-f3677601470b nodeName:}" failed. No retries permitted until 2026-04-17 07:51:29.926927125 +0000 UTC m=+12.702893129 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret") pod "global-pull-secret-syncer-6xq6f" (UID: "69371276-3a35-470f-aaf5-f3677601470b") : object "kube-system"/"original-pull-secret" not registered Apr 17 07:51:29.750338 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:29.750248 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:29.750820 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:29.750250 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:29.750820 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:29.750375 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b" Apr 17 07:51:29.750820 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:29.750450 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:29.750820 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:29.750250 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:29.750820 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:29.750613 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:29.934396 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:29.934362 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:29.934580 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:29.934525 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 07:51:29.934668 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:29.934589 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret podName:69371276-3a35-470f-aaf5-f3677601470b nodeName:}" failed. No retries permitted until 2026-04-17 07:51:31.934572275 +0000 UTC m=+14.710538289 (durationBeforeRetry 2s). 
Apr 17 07:51:31.749821 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:31.749784 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:31.750239 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:31.749784 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f"
Apr 17 07:51:31.750239 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:31.749924 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226"
Apr 17 07:51:31.750239 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:31.749798 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67"
Apr 17 07:51:31.750239 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:31.750031 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b"
Apr 17 07:51:31.750239 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:31.750074 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11"
Apr 17 07:51:31.948914 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:31.948871 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f"
Apr 17 07:51:31.949094 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:31.949019 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 17 07:51:31.949094 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:31.949094 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret podName:69371276-3a35-470f-aaf5-f3677601470b nodeName:}" failed. No retries permitted until 2026-04-17 07:51:35.949077786 +0000 UTC m=+18.725043788 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret") pod "global-pull-secret-syncer-6xq6f" (UID: "69371276-3a35-470f-aaf5-f3677601470b") : object "kube-system"/"original-pull-secret" not registered
Apr 17 07:51:33.750020 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:33.749985 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f"
Apr 17 07:51:33.750484 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:33.750023 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:33.750484 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:33.750096 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b"
Apr 17 07:51:33.750484 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:33.750193 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226"
Apr 17 07:51:33.750484 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:33.750249 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67"
Apr 17 07:51:33.750484 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:33.750310 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11"
Apr 17 07:51:35.371027 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:35.370986 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:35.371520 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.371160 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 07:51:35.371520 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.371243 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:51.371222513 +0000 UTC m=+34.147188514 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 07:51:35.472186 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:35.472146 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67"
Apr 17 07:51:35.472351 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.472316 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 17 07:51:35.472351 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.472336 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 17 07:51:35.472351 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.472345 2569 projected.go:194] Error preparing data for projected volume kube-api-access-6q7vj for pod openshift-network-diagnostics/network-check-target-t7s67: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 17 07:51:35.472524 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.472405 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj podName:f102eb44-3020-49b4-b898-dcf83e0d0a11 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:51.472388566 +0000 UTC m=+34.248354568 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6q7vj" (UniqueName: "kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj") pod "network-check-target-t7s67" (UID: "f102eb44-3020-49b4-b898-dcf83e0d0a11") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 17 07:51:35.749448 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:35.749364 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f"
Apr 17 07:51:35.749448 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:35.749401 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67"
Apr 17 07:51:35.749448 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:35.749406 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:35.749730 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.749511 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b"
Apr 17 07:51:35.749730 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.749664 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11"
Apr 17 07:51:35.749837 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.749742 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226"
Apr 17 07:51:35.976164 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:35.976120 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f"
Apr 17 07:51:35.976328 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.976276 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 17 07:51:35.976389 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:35.976346 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret podName:69371276-3a35-470f-aaf5-f3677601470b nodeName:}" failed. No retries permitted until 2026-04-17 07:51:43.976326873 +0000 UTC m=+26.752292875 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret") pod "global-pull-secret-syncer-6xq6f" (UID: "69371276-3a35-470f-aaf5-f3677601470b") : object "kube-system"/"original-pull-secret" not registered
Apr 17 07:51:37.756484 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.756453 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:37.756841 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:37.756589 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226"
Apr 17 07:51:37.756841 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.756705 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:37.756841 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:37.756775 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:37.757002 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.756843 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:37.757002 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:37.756907 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b" Apr 17 07:51:37.848223 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.848084 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jmbz2" event={"ID":"5681141e-f336-418e-bb90-9a38ef69d0fc","Type":"ContainerStarted","Data":"afe0aa1dab8adcd3ab4155aa812c8d4414b1d0ce0a1f42836c82f23a743195dc"} Apr 17 07:51:37.849609 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.849562 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-ktjch" event={"ID":"8d6a3eec-3fe4-46c9-9cf9-999e26bceb92","Type":"ContainerStarted","Data":"9a965fbf11cc354a32048b007d28083ecdff3b4a38103b3710a5ea26485e355c"} Apr 17 07:51:37.855276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.855162 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"acdbb0af61c03fbb82930a26f93784c60f12f40c8c92cf55803880327a75d2ab"} Apr 17 07:51:37.860554 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.860518 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jmbz2" podStartSLOduration=10.74775222 podStartE2EDuration="19.860507404s" podCreationTimestamp="2026-04-17 07:51:18 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.314436077 +0000 UTC m=+3.090402079" lastFinishedPulling="2026-04-17 07:51:29.427191261 +0000 UTC m=+12.203157263" observedRunningTime="2026-04-17 07:51:37.860263752 +0000 UTC m=+20.636229774" watchObservedRunningTime="2026-04-17 07:51:37.860507404 +0000 UTC m=+20.636473426" Apr 17 07:51:37.873340 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.873287 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xg76x" podStartSLOduration=3.624902563 podStartE2EDuration="20.873270664s" podCreationTimestamp="2026-04-17 07:51:17 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.306093349 +0000 UTC m=+3.082059365" lastFinishedPulling="2026-04-17 07:51:37.554461464 +0000 UTC m=+20.330427466" observedRunningTime="2026-04-17 07:51:37.872565198 +0000 UTC m=+20.648531246" watchObservedRunningTime="2026-04-17 07:51:37.873270664 +0000 UTC m=+20.649236688" Apr 
17 07:51:37.889408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:37.889298 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-ktjch" podStartSLOduration=2.6183055939999997 podStartE2EDuration="19.889279029s" podCreationTimestamp="2026-04-17 07:51:18 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.311912142 +0000 UTC m=+3.087878145" lastFinishedPulling="2026-04-17 07:51:37.582885575 +0000 UTC m=+20.358851580" observedRunningTime="2026-04-17 07:51:37.88831457 +0000 UTC m=+20.664280595" watchObservedRunningTime="2026-04-17 07:51:37.889279029 +0000 UTC m=+20.665245055" Apr 17 07:51:38.781864 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.781841 2569 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 17 07:51:38.858881 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.858851 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 07:51:38.859146 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.859124 2569 generic.go:358] "Generic (PLEG): container finished" podID="2e0e406d-3d55-41f3-ba63-448c73f82ded" containerID="64f779a75b68fcae750a4884b1fef73bbe36bf331ef9f1ded54c934c126eaea2" exitCode=1 Apr 17 07:51:38.859228 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.859193 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"a006842a050091bac75c1b4d52b490f9f70f48d5f8eec5e2ead797392f582e90"} Apr 17 07:51:38.859266 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.859227 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"4e7e8f0ebec1339c2de1c91c64425147d20ddaa3e01807bb844eb373f6ff6935"} Apr 17 07:51:38.859266 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.859237 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"db9c22f34e8cb547944cf14fd8e47dbfccfea76fe3f059cae069a998b197c446"} Apr 17 07:51:38.859266 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.859245 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"3a5d728129026970a074e21e813247d4b096546ec4a5c2ac1441292d960935f3"} Apr 17 07:51:38.859266 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.859253 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerDied","Data":"64f779a75b68fcae750a4884b1fef73bbe36bf331ef9f1ded54c934c126eaea2"} Apr 17 07:51:38.860289 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.860262 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-txzjm" event={"ID":"edf1e79e-27d7-4945-a525-2276d0340c21","Type":"ContainerStarted","Data":"6fe4a334000e287448030cb072e127847df309cde15cab8cad65e9ac300a96a0"} Apr 17 07:51:38.861510 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.861490 2569 generic.go:358] "Generic 
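The "Observed pod startup duration" entries above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For node-ca-jmbz2, 19.860507404s minus the 9.112755184s pull window gives exactly the reported 10.74775222s. A short sketch that redoes the arithmetic from the timestamps in that entry (the layout string is just Go's reference time format for these stamps):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the node-ca-jmbz2 latency-tracker entry.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-04-17 07:51:18 +0000 UTC")
	firstPull := parse("2026-04-17 07:51:20.314436077 +0000 UTC")
	lastPull := parse("2026-04-17 07:51:29.427191261 +0000 UTC")
	running := parse("2026-04-17 07:51:37.860507404 +0000 UTC")

	e2e := running.Sub(created)          // 19.860507404s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // 10.74775222s (podStartSLOduration)
	fmt.Println(e2e, slo)
}
```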
Apr 17 07:51:38.861592 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.861548 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerDied","Data":"79e9db3a5df04a26813d8e3dc165b0a06ecc889c0c1e264c772e19db413a3121"}
Apr 17 07:51:38.862909 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.862887 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jm5zl" event={"ID":"4331ee4b-17e0-4bfd-a306-b47ead03f055","Type":"ContainerStarted","Data":"e1e6cf22308d538e8a805a66261a39c36cf68db5efd97e3206856c2cb65e702d"}
Apr 17 07:51:38.864476 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.864458 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" event={"ID":"41bd8583-55d0-44d9-b7c7-0dee1be59867","Type":"ContainerStarted","Data":"581c87e1ecb9975a0f39ded43ca8ea800c34e8993e1650d6f1555491050680a1"}
Apr 17 07:51:38.864550 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.864484 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" event={"ID":"41bd8583-55d0-44d9-b7c7-0dee1be59867","Type":"ContainerStarted","Data":"b78961f713be7553d02b9cc878a29eea22a530b205dc003a3d9296a2198c7713"}
Apr 17 07:51:38.865629 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.865608 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xg76x" event={"ID":"9719cdb1-840b-4a4d-8e68-be9ea50fc183","Type":"ContainerStarted","Data":"94994fd4777b1604a0cdff34a95e083df1c726eec4cb6ce978ce39cb3092d131"}
Apr 17 07:51:38.873937 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.873902 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-txzjm" podStartSLOduration=4.635848478 podStartE2EDuration="21.87389114s" podCreationTimestamp="2026-04-17 07:51:17 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.315755762 +0000 UTC m=+3.091721769" lastFinishedPulling="2026-04-17 07:51:37.553798431 +0000 UTC m=+20.329764431" observedRunningTime="2026-04-17 07:51:38.873323821 +0000 UTC m=+21.649289844" watchObservedRunningTime="2026-04-17 07:51:38.87389114 +0000 UTC m=+21.649857163"
Apr 17 07:51:38.888782 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:38.888748 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jm5zl" podStartSLOduration=4.43596706 podStartE2EDuration="21.888738277s" podCreationTimestamp="2026-04-17 07:51:17 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.312956523 +0000 UTC m=+3.088922526" lastFinishedPulling="2026-04-17 07:51:37.765727728 +0000 UTC m=+20.541693743" observedRunningTime="2026-04-17 07:51:38.888678604 +0000 UTC m=+21.664644627" watchObservedRunningTime="2026-04-17 07:51:38.888738277 +0000 UTC m=+21.664704332"
Apr 17 07:51:39.719158 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.719029 2569 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-17T07:51:38.78186138Z","UUID":"2cdefdcc-75b2-47ff-b6d5-bd0ac7275588","Handler":null,"Name":"","Endpoint":""}
Apr 17 07:51:39.721058 ip-10-0-132-178 kubenswrapper[2569]:
I0417 07:51:39.721031 2569 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 17 07:51:39.721058 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.721063 2569 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 17 07:51:39.752571 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.752532 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:39.752737 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.752532 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:39.752790 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:39.752766 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b" Apr 17 07:51:39.752790 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.752542 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:39.752890 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:39.752868 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:39.753339 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:39.752635 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:39.869500 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.869266 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-k6m72" event={"ID":"2f75afd8-16c7-448e-8f36-42d2b2219a87","Type":"ContainerStarted","Data":"6497bce812f7cc12f369688559788e65b087c770cc87115c5fedcd61aaa90aff"} Apr 17 07:51:39.871691 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.871616 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" event={"ID":"41bd8583-55d0-44d9-b7c7-0dee1be59867","Type":"ContainerStarted","Data":"b3009acd47bcdaecf72dad04a3f70a357fb5a48d4aafa5994b4a1a86717d16e8"} Apr 17 07:51:39.908556 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.908471 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-k6m72" podStartSLOduration=4.638296614 podStartE2EDuration="21.908457553s" podCreationTimestamp="2026-04-17 07:51:18 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.310748431 +0000 UTC m=+3.086714436" lastFinishedPulling="2026-04-17 07:51:37.580909373 +0000 UTC m=+20.356875375" observedRunningTime="2026-04-17 07:51:39.908387989 +0000 UTC m=+22.684354011" watchObservedRunningTime="2026-04-17 07:51:39.908457553 +0000 UTC m=+22.684423576" Apr 17 07:51:39.924603 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:39.924559 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-t8rsw" podStartSLOduration=2.5720262480000002 podStartE2EDuration="21.924545762s" podCreationTimestamp="2026-04-17 07:51:18 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.307219479 +0000 UTC m=+3.083185480" lastFinishedPulling="2026-04-17 07:51:39.65973899 +0000 UTC m=+22.435704994" observedRunningTime="2026-04-17 07:51:39.924403709 +0000 UTC m=+22.700369733" watchObservedRunningTime="2026-04-17 07:51:39.924545762 +0000 UTC m=+22.700511782" Apr 17 07:51:40.875484 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:40.875458 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 07:51:40.875920 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:40.875808 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"26b13cf18c01e4310dea5d869e6c3b75eb09622ae63c541b622f8326be8c9502"} Apr 17 07:51:41.752385 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:41.752355 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:41.752537 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:41.752355 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:41.752537 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:41.752453 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b" Apr 17 07:51:41.752611 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:41.752533 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:41.752611 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:41.752361 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:41.752690 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:41.752610 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:42.461115 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.460949 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-txzjm" Apr 17 07:51:42.463377 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.462088 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-txzjm" Apr 17 07:51:42.883157 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.882958 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 07:51:42.883594 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.883558 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"4b04686cdfb8f75698e5c2ab2a2f45027d07ee0b916622685bbafad0c15f1f57"} Apr 17 07:51:42.883761 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.883747 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-txzjm" Apr 17 07:51:42.884254 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.884022 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:42.884254 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.884051 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:42.884254 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.884064 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:42.884254 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.884187 2569 scope.go:117] "RemoveContainer" containerID="64f779a75b68fcae750a4884b1fef73bbe36bf331ef9f1ded54c934c126eaea2" Apr 17 07:51:42.884480 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.884278 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-txzjm" Apr 17 07:51:42.900961 ip-10-0-132-178 kubenswrapper[2569]: 
I0417 07:51:42.900932 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:42.901366 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:42.901348 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:51:43.750202 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:43.750167 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:43.750202 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:43.750199 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:43.750718 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:43.750179 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:43.750718 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:43.750280 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b" Apr 17 07:51:43.750718 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:43.750375 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:43.750718 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:43.750462 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:44.034067 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.034027 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:44.034264 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:44.034159 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 07:51:44.034264 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:44.034246 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret podName:69371276-3a35-470f-aaf5-f3677601470b nodeName:}" failed. No retries permitted until 2026-04-17 07:52:00.034221585 +0000 UTC m=+42.810187609 (durationBeforeRetry 16s). 
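The recurring `object "kube-system"/"original-pull-secret" not registered` failure above does not necessarily mean the secret is missing from the cluster; it indicates the kubelet has not yet registered and synced a watch for that object on behalf of the pod. The `Caches populated ... "kube-system"/"original-pull-secret"` entry at 07:51:50 further down marks the point where that condition clears. One way to confirm from outside the node that the object itself exists, sketched with client-go; the kubeconfig path is an assumption for illustration:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig location is environment-specific; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the secret the kubelet is waiting on.
	s, err := cs.CoreV1().Secrets("kube-system").Get(
		context.TODO(), "original-pull-secret", metav1.GetOptions{})
	if err != nil {
		fmt.Println("secret missing or inaccessible:", err)
		return
	}
	fmt.Println("secret exists:", s.Name)
}
```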
Apr 17 07:51:44.626143 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.626100 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-6xq6f"]
Apr 17 07:51:44.626306 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.626227 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f"
Apr 17 07:51:44.626358 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:44.626328 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b"
Apr 17 07:51:44.629085 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.629060 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-t7s67"]
Apr 17 07:51:44.629215 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.629146 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67"
Apr 17 07:51:44.629264 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:44.629244 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11"
Apr 17 07:51:44.629548 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.629530 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pqns9"]
Apr 17 07:51:44.629623 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.629611 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9"
Apr 17 07:51:44.629724 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:44.629709 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:44.887797 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.887711 2569 generic.go:358] "Generic (PLEG): container finished" podID="20cbf601-967c-4931-9e41-f9b3377a7284" containerID="1d3892c075a7d24bec13d8a4f4cc14611af29a6e606fa2608704acf911c40fce" exitCode=0 Apr 17 07:51:44.888176 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.887792 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerDied","Data":"1d3892c075a7d24bec13d8a4f4cc14611af29a6e606fa2608704acf911c40fce"} Apr 17 07:51:44.891308 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.891291 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 07:51:44.891690 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.891667 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" event={"ID":"2e0e406d-3d55-41f3-ba63-448c73f82ded","Type":"ContainerStarted","Data":"55e4177c347cb0c641619ef19cadec06e88292a5cbfccb3d095432d45f3fc4d9"} Apr 17 07:51:44.937052 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:44.937002 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" podStartSLOduration=9.633587947 podStartE2EDuration="26.936989384s" podCreationTimestamp="2026-04-17 07:51:18 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.318174094 +0000 UTC m=+3.094140101" lastFinishedPulling="2026-04-17 07:51:37.621575534 +0000 UTC m=+20.397541538" observedRunningTime="2026-04-17 07:51:44.93658815 +0000 UTC m=+27.712554171" watchObservedRunningTime="2026-04-17 07:51:44.936989384 +0000 UTC m=+27.712955407" Apr 17 07:51:46.749810 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:46.749728 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:46.750163 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:46.749728 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:46.750163 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:46.749831 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b" Apr 17 07:51:46.750163 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:46.749728 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:46.750163 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:46.749910 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:46.750163 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:46.749977 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:46.897919 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:46.897751 2569 generic.go:358] "Generic (PLEG): container finished" podID="20cbf601-967c-4931-9e41-f9b3377a7284" containerID="ac20c55b0af89cdc5162221fdc7de29f5c5107d36365e4f11f575d0765e41b95" exitCode=0 Apr 17 07:51:46.898067 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:46.897846 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerDied","Data":"ac20c55b0af89cdc5162221fdc7de29f5c5107d36365e4f11f575d0765e41b95"} Apr 17 07:51:47.759752 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:47.759727 2569 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20cbf601_967c_4931_9e41_f9b3377a7284.slice/crio-conmon-7c104f5dec9630665005399c1abebdff6509b4fbb48c305ae1abfdf40bd082f1.scope\": RecentStats: unable to find data in memory cache]" Apr 17 07:51:47.902283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:47.902192 2569 generic.go:358] "Generic (PLEG): container finished" podID="20cbf601-967c-4931-9e41-f9b3377a7284" containerID="7c104f5dec9630665005399c1abebdff6509b4fbb48c305ae1abfdf40bd082f1" exitCode=0 Apr 17 07:51:47.902283 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:47.902245 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerDied","Data":"7c104f5dec9630665005399c1abebdff6509b4fbb48c305ae1abfdf40bd082f1"} Apr 17 07:51:48.749702 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:48.749612 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:48.749702 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:48.749687 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:48.749702 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:48.749701 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:48.749913 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:48.749780 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-t7s67" podUID="f102eb44-3020-49b4-b898-dcf83e0d0a11" Apr 17 07:51:48.749913 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:48.749891 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:51:48.749997 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:48.749962 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-6xq6f" podUID="69371276-3a35-470f-aaf5-f3677601470b" Apr 17 07:51:50.577320 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.577240 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-178.ec2.internal" event="NodeReady" Apr 17 07:51:50.577892 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.577403 2569 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 17 07:51:50.611465 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.611433 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv"] Apr 17 07:51:50.619847 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.619813 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf"] Apr 17 07:51:50.620023 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.619982 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" Apr 17 07:51:50.622586 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.622562 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 17 07:51:50.622973 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.622951 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 17 07:51:50.623077 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.623043 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 17 07:51:50.623276 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.623256 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 17 07:51:50.623560 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.623409 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-5pmcp\"" Apr 17 07:51:50.624417 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.624397 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-757f7bcd7c-6bhnz"] Apr 17 07:51:50.624604 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.624582 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.626805 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.626688 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 17 07:51:50.627497 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.627479 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66"] Apr 17 07:51:50.627677 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.627634 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.631385 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.631339 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv"] Apr 17 07:51:50.631385 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.631376 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mlkb9"] Apr 17 07:51:50.632411 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.631704 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.633846 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.633127 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 17 07:51:50.633846 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.633428 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 17 07:51:50.633846 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.633624 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 17 07:51:50.634225 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.634209 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6b9tl\"" Apr 17 07:51:50.634680 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.634661 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t7j9w"] Apr 17 07:51:50.635632 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.635612 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 17 07:51:50.635713 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.635679 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 17 07:51:50.635904 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.635888 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 17 07:51:50.635970 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.635888 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\"" Apr 17 07:51:50.638442 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.638233 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66"] Apr 17 07:51:50.638442 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.638259 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-757f7bcd7c-6bhnz"] Apr 17 07:51:50.638442 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.638327 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.638442 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.638354 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:50.639496 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.639456 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf"] Apr 17 07:51:50.639832 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.639802 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 17 07:51:50.640971 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.640918 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 17 07:51:50.641109 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.641069 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-tgj8b\"" Apr 17 07:51:50.641179 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.641140 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-l9k4x\"" Apr 17 07:51:50.641302 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.641283 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 17 07:51:50.641302 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.641323 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 17 07:51:50.641504 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.641486 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 17 07:51:50.641716 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.641614 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 17 07:51:50.642468 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.642448 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mlkb9"] Apr 17 07:51:50.648021 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.647990 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t7j9w"] Apr 17 07:51:50.750210 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.750168 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:51:50.750401 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.750302 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:50.750455 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.750436 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:50.752876 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.752853 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-tqz2q\"" Apr 17 07:51:50.753002 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.752856 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 17 07:51:50.753118 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.753097 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 17 07:51:50.753230 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.753185 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 17 07:51:50.753299 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.753231 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 17 07:51:50.753358 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.753300 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-4xllf\"" Apr 17 07:51:50.784403 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784374 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsm7d\" (UniqueName: \"kubernetes.io/projected/cb2327ec-f42e-4e5d-b122-579bbba751a8-kube-api-access-zsm7d\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.784534 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784406 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3724dff9-b116-4340-9f6c-3565e9aa82fd-ca-trust-extracted\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.784534 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784434 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-trusted-ca\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.784534 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784456 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8gv\" (UniqueName: \"kubernetes.io/projected/555a944e-6ca6-491d-b2ef-b0257f2549a3-kube-api-access-gg8gv\") pod \"managed-serviceaccount-addon-agent-7d9464ccdc-xswwv\" (UID: \"555a944e-6ca6-491d-b2ef-b0257f2549a3\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" Apr 17 07:51:50.784534 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784484 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-image-registry-private-configuration\") pod 
\"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.784534 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784507 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-ca\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.784534 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784523 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcgtr\" (UniqueName: \"kubernetes.io/projected/7302806b-ac03-4214-9412-5ec8447ef5fe-kube-api-access-xcgtr\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.784787 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784579 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-bound-sa-token\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.784787 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784660 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lmc4\" (UniqueName: \"kubernetes.io/projected/e5decccf-5983-4694-ad30-679303b4d922-kube-api-access-9lmc4\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.784787 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784689 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.784787 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784725 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.784787 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784750 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:50.784787 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784768 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/cb2327ec-f42e-4e5d-b122-579bbba751a8-tmp-dir\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784809 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/7302806b-ac03-4214-9412-5ec8447ef5fe-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784859 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5decccf-5983-4694-ad30-679303b4d922-tmp\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784884 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/e5decccf-5983-4694-ad30-679303b4d922-klusterlet-config\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784913 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784938 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/555a944e-6ca6-491d-b2ef-b0257f2549a3-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7d9464ccdc-xswwv\" (UID: \"555a944e-6ca6-491d-b2ef-b0257f2549a3\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.784970 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-certificates\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.785007 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gd7p\" (UniqueName: \"kubernetes.io/projected/dffe644a-60cc-49b3-96e0-1da3bf018246-kube-api-access-5gd7p\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.785033 2569 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-installation-pull-secrets\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.785073 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.785061 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bp8\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-kube-api-access-f7bp8\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.785408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.785085 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.785408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.785129 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-hub\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.785408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.785158 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb2327ec-f42e-4e5d-b122-579bbba751a8-config-volume\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.885896 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.885811 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-bound-sa-token\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.885896 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.885873 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lmc4\" (UniqueName: \"kubernetes.io/projected/e5decccf-5983-4694-ad30-679303b4d922-kube-api-access-9lmc4\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.886122 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.885900 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.886122 ip-10-0-132-178 kubenswrapper[2569]: I0417 
07:51:50.885937 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.886122 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.885976 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:50.886122 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.885996 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cb2327ec-f42e-4e5d-b122-579bbba751a8-tmp-dir\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.886122 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886018 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/7302806b-ac03-4214-9412-5ec8447ef5fe-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.886122 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:50.886044 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:51:50.886122 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886049 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5decccf-5983-4694-ad30-679303b4d922-tmp\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:50.886136 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:51.386116273 +0000 UTC m=+34.162082285 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886173 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/e5decccf-5983-4694-ad30-679303b4d922-klusterlet-config\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886212 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886239 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/555a944e-6ca6-491d-b2ef-b0257f2549a3-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7d9464ccdc-xswwv\" (UID: \"555a944e-6ca6-491d-b2ef-b0257f2549a3\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886272 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-certificates\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:50.886291 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886303 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gd7p\" (UniqueName: \"kubernetes.io/projected/dffe644a-60cc-49b3-96e0-1da3bf018246-kube-api-access-5gd7p\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886330 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-installation-pull-secrets\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:50.886345 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:51.386329344 +0000 UTC m=+34.162295352 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886381 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f7bp8\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-kube-api-access-f7bp8\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886409 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886450 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-hub\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.886469 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886450 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5decccf-5983-4694-ad30-679303b4d922-tmp\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886480 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb2327ec-f42e-4e5d-b122-579bbba751a8-config-volume\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886530 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zsm7d\" (UniqueName: \"kubernetes.io/projected/cb2327ec-f42e-4e5d-b122-579bbba751a8-kube-api-access-zsm7d\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886561 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3724dff9-b116-4340-9f6c-3565e9aa82fd-ca-trust-extracted\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886590 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-trusted-ca\") pod 
\"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886612 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8gv\" (UniqueName: \"kubernetes.io/projected/555a944e-6ca6-491d-b2ef-b0257f2549a3-kube-api-access-gg8gv\") pod \"managed-serviceaccount-addon-agent-7d9464ccdc-xswwv\" (UID: \"555a944e-6ca6-491d-b2ef-b0257f2549a3\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886632 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-image-registry-private-configuration\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:50.886634 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886676 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-ca\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.886700 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcgtr\" (UniqueName: \"kubernetes.io/projected/7302806b-ac03-4214-9412-5ec8447ef5fe-kube-api-access-xcgtr\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.887015 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb2327ec-f42e-4e5d-b122-579bbba751a8-config-volume\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.887195 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.887172 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/7302806b-ac03-4214-9412-5ec8447ef5fe-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.887782 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.887316 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3724dff9-b116-4340-9f6c-3565e9aa82fd-ca-trust-extracted\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.887782 ip-10-0-132-178 kubenswrapper[2569]: 
I0417 07:51:50.887402 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cb2327ec-f42e-4e5d-b122-579bbba751a8-tmp-dir\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.887782 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.887746 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-certificates\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.887938 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:50.886677 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:51:50.887938 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:50.887887 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:51:51.387864537 +0000 UTC m=+34.163830543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:51:50.888944 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.888897 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-trusted-ca\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.891249 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.891225 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-image-registry-private-configuration\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.891420 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.891396 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/555a944e-6ca6-491d-b2ef-b0257f2549a3-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7d9464ccdc-xswwv\" (UID: \"555a944e-6ca6-491d-b2ef-b0257f2549a3\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" Apr 17 07:51:50.891544 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.891522 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/e5decccf-5983-4694-ad30-679303b4d922-klusterlet-config\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.891601 ip-10-0-132-178 kubenswrapper[2569]: I0417 
07:51:50.891524 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.892042 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.892018 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-installation-pull-secrets\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.893581 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.893554 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-ca\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.893884 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.893832 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.893985 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.893889 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/7302806b-ac03-4214-9412-5ec8447ef5fe-hub\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.894443 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.894388 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-bound-sa-token\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.894951 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.894901 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lmc4\" (UniqueName: \"kubernetes.io/projected/e5decccf-5983-4694-ad30-679303b4d922-kube-api-access-9lmc4\") pod \"klusterlet-addon-workmgr-576854cd45-gmvnf\" (UID: \"e5decccf-5983-4694-ad30-679303b4d922\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.896223 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.896201 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gd7p\" (UniqueName: \"kubernetes.io/projected/dffe644a-60cc-49b3-96e0-1da3bf018246-kube-api-access-5gd7p\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:50.896684 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.896663 2569 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8gv\" (UniqueName: \"kubernetes.io/projected/555a944e-6ca6-491d-b2ef-b0257f2549a3-kube-api-access-gg8gv\") pod \"managed-serviceaccount-addon-agent-7d9464ccdc-xswwv\" (UID: \"555a944e-6ca6-491d-b2ef-b0257f2549a3\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" Apr 17 07:51:50.896772 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.896752 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7bp8\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-kube-api-access-f7bp8\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:50.897310 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.897288 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcgtr\" (UniqueName: \"kubernetes.io/projected/7302806b-ac03-4214-9412-5ec8447ef5fe-kube-api-access-xcgtr\") pod \"cluster-proxy-proxy-agent-674846d7bd-9qx66\" (UID: \"7302806b-ac03-4214-9412-5ec8447ef5fe\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:50.897521 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.897485 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsm7d\" (UniqueName: \"kubernetes.io/projected/cb2327ec-f42e-4e5d-b122-579bbba751a8-kube-api-access-zsm7d\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:50.944716 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.944684 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" Apr 17 07:51:50.953605 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.953565 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:51:50.968843 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:50.968578 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:51:51.133975 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.133944 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf"] Apr 17 07:51:51.138459 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.138367 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv"] Apr 17 07:51:51.142890 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:51.142866 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5decccf_5983_4694_ad30_679303b4d922.slice/crio-a605dcf90d8ad4f956ef57d8041692553f0b78b1e4aa0892f6cb9b9e40967126 WatchSource:0}: Error finding container a605dcf90d8ad4f956ef57d8041692553f0b78b1e4aa0892f6cb9b9e40967126: Status 404 returned error can't find the container with id a605dcf90d8ad4f956ef57d8041692553f0b78b1e4aa0892f6cb9b9e40967126 Apr 17 07:51:51.148627 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:51.148599 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555a944e_6ca6_491d_b2ef_b0257f2549a3.slice/crio-137ac03b1eb5418523d9a4ac6ba29cbb5ccfa2f129c1203ea6649c9daa11f211 WatchSource:0}: Error finding container 137ac03b1eb5418523d9a4ac6ba29cbb5ccfa2f129c1203ea6649c9daa11f211: Status 404 returned error can't find the container with id 137ac03b1eb5418523d9a4ac6ba29cbb5ccfa2f129c1203ea6649c9daa11f211 Apr 17 07:51:51.159658 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.159619 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66"] Apr 17 07:51:51.167938 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:51.167909 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7302806b_ac03_4214_9412_5ec8447ef5fe.slice/crio-3f64ea29046a2c1bf8c79be66ccbcc446f65fd02435293fcf4bffec7dd64f7fc WatchSource:0}: Error finding container 3f64ea29046a2c1bf8c79be66ccbcc446f65fd02435293fcf4bffec7dd64f7fc: Status 404 returned error can't find the container with id 3f64ea29046a2c1bf8c79be66ccbcc446f65fd02435293fcf4bffec7dd64f7fc Apr 17 07:51:51.391084 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.390997 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:51.391084 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.391045 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:51:51.391285 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.391094 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: 
\"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:51.391285 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.391123 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:51.391285 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391174 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:51:51.391285 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391220 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:51:51.391285 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391238 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:51:51.391285 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391251 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:51:51.391285 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391277 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:52.391255076 +0000 UTC m=+35.167221093 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:51:51.391574 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391299 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:52.391288817 +0000 UTC m=+35.167254818 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:51:51.391574 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391297 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 17 07:51:51.391574 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391315 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:51:52.391306594 +0000 UTC m=+35.167272595 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:51:51.391574 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:51.391355 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:52:23.391340069 +0000 UTC m=+66.167306071 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : secret "metrics-daemon-secret" not found Apr 17 07:51:51.491999 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.491959 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:51.495701 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.495677 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q7vj\" (UniqueName: \"kubernetes.io/projected/f102eb44-3020-49b4-b898-dcf83e0d0a11-kube-api-access-6q7vj\") pod \"network-check-target-t7s67\" (UID: \"f102eb44-3020-49b4-b898-dcf83e0d0a11\") " pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:51.670055 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.669838 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:51:51.830105 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.830071 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-t7s67"] Apr 17 07:51:51.913499 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.913421 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" event={"ID":"e5decccf-5983-4694-ad30-679303b4d922","Type":"ContainerStarted","Data":"a605dcf90d8ad4f956ef57d8041692553f0b78b1e4aa0892f6cb9b9e40967126"} Apr 17 07:51:51.918362 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.916696 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" event={"ID":"555a944e-6ca6-491d-b2ef-b0257f2549a3","Type":"ContainerStarted","Data":"137ac03b1eb5418523d9a4ac6ba29cbb5ccfa2f129c1203ea6649c9daa11f211"} Apr 17 07:51:51.918866 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:51.918836 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" event={"ID":"7302806b-ac03-4214-9412-5ec8447ef5fe","Type":"ContainerStarted","Data":"3f64ea29046a2c1bf8c79be66ccbcc446f65fd02435293fcf4bffec7dd64f7fc"} Apr 17 07:51:52.400420 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:52.400334 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:52.400420 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:52.400404 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:52.400640 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:52.400443 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:52.400640 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:52.400600 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:51:52.400640 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:52.400615 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:51:52.400828 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:52.400695 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:51:54.400671862 +0000 UTC m=+37.176637874 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:51:52.401132 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:52.401112 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:51:52.401210 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:52.401166 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:54.40115169 +0000 UTC m=+37.177117697 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:51:52.401268 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:52.401223 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:51:52.401268 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:52.401252 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:54.401241988 +0000 UTC m=+37.177207992 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:51:54.418428 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:54.418390 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:54.418454 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:54.418500 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:54.418549 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:54.418614 2569 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:58.418595825 +0000 UTC m=+41.194561840 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:54.418638 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:54.418677 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:54.418713 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:51:58.418700815 +0000 UTC m=+41.194666816 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:54.418772 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:51:54.418849 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:54.418800 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:51:58.4187905 +0000 UTC m=+41.194756503 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:51:55.159784 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:51:55.159740 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf102eb44_3020_49b4_b898_dcf83e0d0a11.slice/crio-796856739cb6ef3bc8a0a155178f916f65d1f8978dd74a7e5ce93d5cf784e471 WatchSource:0}: Error finding container 796856739cb6ef3bc8a0a155178f916f65d1f8978dd74a7e5ce93d5cf784e471: Status 404 returned error can't find the container with id 796856739cb6ef3bc8a0a155178f916f65d1f8978dd74a7e5ce93d5cf784e471 Apr 17 07:51:55.929397 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:55.929190 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-t7s67" event={"ID":"f102eb44-3020-49b4-b898-dcf83e0d0a11","Type":"ContainerStarted","Data":"796856739cb6ef3bc8a0a155178f916f65d1f8978dd74a7e5ce93d5cf784e471"} Apr 17 07:51:58.453241 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:58.453199 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:58.453260 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:51:58.453298 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:58.453407 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:58.453432 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:58.453448 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:58.453431 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:58.453487 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. 
No retries permitted until 2026-04-17 07:52:06.453467614 +0000 UTC m=+49.229433616 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:58.453511 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:52:06.453495696 +0000 UTC m=+49.229461698 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:51:58.453741 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:51:58.453528 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:52:06.453519121 +0000 UTC m=+49.229485127 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:52:00.068587 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.068548 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:52:00.072265 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.072232 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/69371276-3a35-470f-aaf5-f3677601470b-original-pull-secret\") pod \"global-pull-secret-syncer-6xq6f\" (UID: \"69371276-3a35-470f-aaf5-f3677601470b\") " pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:52:00.362702 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.362600 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-6xq6f" Apr 17 07:52:00.737668 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.737605 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-6xq6f"] Apr 17 07:52:00.748548 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:52:00.748498 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69371276_3a35_470f_aaf5_f3677601470b.slice/crio-78ed92bc200eac3ba81229fe98c348be2a01454aac2ecc632ad6728293e96315 WatchSource:0}: Error finding container 78ed92bc200eac3ba81229fe98c348be2a01454aac2ecc632ad6728293e96315: Status 404 returned error can't find the container with id 78ed92bc200eac3ba81229fe98c348be2a01454aac2ecc632ad6728293e96315 Apr 17 07:52:00.941293 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.941255 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" event={"ID":"e5decccf-5983-4694-ad30-679303b4d922","Type":"ContainerStarted","Data":"2c66af2dc9b6488eabe2a123fc39fbc409f7d2f2d6854163b19d9272e6045b28"} Apr 17 07:52:00.941482 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.941465 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:52:00.942807 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.942777 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" event={"ID":"555a944e-6ca6-491d-b2ef-b0257f2549a3","Type":"ContainerStarted","Data":"dc637c4ed377e084112bb9fa31996a66e8f8bb50caa4c0e05486a9cd3d7f86da"} Apr 17 07:52:00.943346 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.943315 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:52:00.945143 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.945122 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerStarted","Data":"cc95a8c1cc5712944d46f870f8db6e9f836441fdcb6b8df9490745f9a87acf0a"} Apr 17 07:52:00.946243 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.946223 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" event={"ID":"7302806b-ac03-4214-9412-5ec8447ef5fe","Type":"ContainerStarted","Data":"feb493d6fb2fbad990b3ccf18c58dfd88bb74a99f30b77605024344cd411bf08"} Apr 17 07:52:00.947017 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.946997 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-6xq6f" event={"ID":"69371276-3a35-470f-aaf5-f3677601470b","Type":"ContainerStarted","Data":"78ed92bc200eac3ba81229fe98c348be2a01454aac2ecc632ad6728293e96315"} Apr 17 07:52:00.957351 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.957313 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" podStartSLOduration=5.522836636 podStartE2EDuration="14.957300959s" podCreationTimestamp="2026-04-17 07:51:46 +0000 UTC" firstStartedPulling="2026-04-17 07:51:51.145347645 +0000 UTC 
m=+33.921313646" lastFinishedPulling="2026-04-17 07:52:00.579811965 +0000 UTC m=+43.355777969" observedRunningTime="2026-04-17 07:52:00.956566626 +0000 UTC m=+43.732532650" watchObservedRunningTime="2026-04-17 07:52:00.957300959 +0000 UTC m=+43.733266981" Apr 17 07:52:00.985021 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:00.984973 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" podStartSLOduration=6.621655534 podStartE2EDuration="14.984957938s" podCreationTimestamp="2026-04-17 07:51:46 +0000 UTC" firstStartedPulling="2026-04-17 07:51:51.150521858 +0000 UTC m=+33.926487864" lastFinishedPulling="2026-04-17 07:51:59.513824265 +0000 UTC m=+42.289790268" observedRunningTime="2026-04-17 07:52:00.984862473 +0000 UTC m=+43.760828496" watchObservedRunningTime="2026-04-17 07:52:00.984957938 +0000 UTC m=+43.760923961" Apr 17 07:52:01.952162 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:01.951992 2569 generic.go:358] "Generic (PLEG): container finished" podID="20cbf601-967c-4931-9e41-f9b3377a7284" containerID="cc95a8c1cc5712944d46f870f8db6e9f836441fdcb6b8df9490745f9a87acf0a" exitCode=0 Apr 17 07:52:01.952162 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:01.952082 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerDied","Data":"cc95a8c1cc5712944d46f870f8db6e9f836441fdcb6b8df9490745f9a87acf0a"} Apr 17 07:52:01.954558 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:01.954533 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-t7s67" event={"ID":"f102eb44-3020-49b4-b898-dcf83e0d0a11","Type":"ContainerStarted","Data":"6a24ba018b6aeb5baabe98d2e779f5482c33a376cbdfd7e4d99fb5256663097c"} Apr 17 07:52:01.954713 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:01.954697 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:52:01.991095 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:01.991053 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-t7s67" podStartSLOduration=38.105863321 podStartE2EDuration="43.991039463s" podCreationTimestamp="2026-04-17 07:51:18 +0000 UTC" firstStartedPulling="2026-04-17 07:51:55.161726154 +0000 UTC m=+37.937692155" lastFinishedPulling="2026-04-17 07:52:01.046902296 +0000 UTC m=+43.822868297" observedRunningTime="2026-04-17 07:52:01.989865767 +0000 UTC m=+44.765831790" watchObservedRunningTime="2026-04-17 07:52:01.991039463 +0000 UTC m=+44.767005486" Apr 17 07:52:02.959855 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:02.959705 2569 generic.go:358] "Generic (PLEG): container finished" podID="20cbf601-967c-4931-9e41-f9b3377a7284" containerID="4d38c5afa441495db6d7e1c5554356815f81d7b5546e90dc5ef1594b2df268ca" exitCode=0 Apr 17 07:52:02.959855 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:02.959800 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerDied","Data":"4d38c5afa441495db6d7e1c5554356815f81d7b5546e90dc5ef1594b2df268ca"} Apr 17 07:52:04.968571 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:04.968538 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" event={"ID":"7302806b-ac03-4214-9412-5ec8447ef5fe","Type":"ContainerStarted","Data":"3b6c3108eb19a96c5b9e0d553f95c5cbe17d434eb0ff1b053e28b19ed4ddb390"} Apr 17 07:52:05.982995 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:05.982951 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" event={"ID":"7302806b-ac03-4214-9412-5ec8447ef5fe","Type":"ContainerStarted","Data":"2d269fb45d17eccd0191d0eb1cdb76c2cbdaaa3b55ad74883c7cf8897e3fcbc1"} Apr 17 07:52:05.984187 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:05.984159 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-6xq6f" event={"ID":"69371276-3a35-470f-aaf5-f3677601470b","Type":"ContainerStarted","Data":"128a149da7148c3b90c510e8347c05a4894c532a00937fdd1953f02a04bddd7f"} Apr 17 07:52:05.987070 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:05.987047 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjghl" event={"ID":"20cbf601-967c-4931-9e41-f9b3377a7284","Type":"ContainerStarted","Data":"d76d137cd8c30e6bcd0b2b421284ff3b6a4e0556e6533019e413952c85041ca7"} Apr 17 07:52:06.001636 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:06.001586 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" podStartSLOduration=6.325352133 podStartE2EDuration="20.001570285s" podCreationTimestamp="2026-04-17 07:51:46 +0000 UTC" firstStartedPulling="2026-04-17 07:51:51.169905485 +0000 UTC m=+33.945871486" lastFinishedPulling="2026-04-17 07:52:04.846123635 +0000 UTC m=+47.622089638" observedRunningTime="2026-04-17 07:52:06.000571203 +0000 UTC m=+48.776537246" watchObservedRunningTime="2026-04-17 07:52:06.001570285 +0000 UTC m=+48.777536307" Apr 17 07:52:06.021058 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:06.021003 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rjghl" podStartSLOduration=9.824913032 podStartE2EDuration="49.020991057s" podCreationTimestamp="2026-04-17 07:51:17 +0000 UTC" firstStartedPulling="2026-04-17 07:51:20.317084981 +0000 UTC m=+3.093050982" lastFinishedPulling="2026-04-17 07:51:59.513163002 +0000 UTC m=+42.289129007" observedRunningTime="2026-04-17 07:52:06.019164128 +0000 UTC m=+48.795130150" watchObservedRunningTime="2026-04-17 07:52:06.020991057 +0000 UTC m=+48.796957080" Apr 17 07:52:06.036712 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:06.036665 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-6xq6f" podStartSLOduration=33.925548155 podStartE2EDuration="38.036631795s" podCreationTimestamp="2026-04-17 07:51:28 +0000 UTC" firstStartedPulling="2026-04-17 07:52:00.750627158 +0000 UTC m=+43.526593164" lastFinishedPulling="2026-04-17 07:52:04.86171079 +0000 UTC m=+47.637676804" observedRunningTime="2026-04-17 07:52:06.036077428 +0000 UTC m=+48.812043450" watchObservedRunningTime="2026-04-17 07:52:06.036631795 +0000 UTC m=+48.812597817" Apr 17 07:52:06.524626 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:06.524582 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod 
\"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:06.524670 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:06.524700 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:06.524738 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:06.524760 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:06.524776 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:06.524795 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:06.524816 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:52:22.524800816 +0000 UTC m=+65.300766818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:06.524828 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:52:22.524822646 +0000 UTC m=+65.300788647 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:52:06.524859 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:06.524853 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. No retries permitted until 2026-04-17 07:52:22.524839839 +0000 UTC m=+65.300805840 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:52:15.904048 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:15.904020 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wbxwr" Apr 17 07:52:22.550012 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:22.549965 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:22.550029 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:22.550106 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:22.550143 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:22.550207 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:22.550214 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:22.550233 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:22.550250 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:52:54.550228517 +0000 UTC m=+97.326194518 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:22.550265 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. No retries permitted until 2026-04-17 07:52:54.550259277 +0000 UTC m=+97.326225278 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:52:22.550453 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:22.550276 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:52:54.550270714 +0000 UTC m=+97.326236715 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:52:23.458251 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:23.458217 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:52:23.458440 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:23.458343 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 17 07:52:23.458440 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:23.458396 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:53:27.458381897 +0000 UTC m=+130.234347899 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : secret "metrics-daemon-secret" not found Apr 17 07:52:32.963063 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:32.963034 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-t7s67" Apr 17 07:52:54.586905 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:54.586866 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:54.586927 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:52:54.586958 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:54.587037 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:54.587050 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:54.587070 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:54.587100 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:53:58.587086578 +0000 UTC m=+161.363052578 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:54.587074 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:54.587161 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. 
No retries permitted until 2026-04-17 07:53:58.587142335 +0000 UTC m=+161.363108337 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:52:54.587398 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:52:54.587227 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:53:58.587212648 +0000 UTC m=+161.363178652 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:53:27.536562 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:53:27.536509 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:53:27.537203 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:27.536711 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 17 07:53:27.537203 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:27.536812 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs podName:6f23d3b8-dbbd-4489-8830-f7fb50e6d226 nodeName:}" failed. No retries permitted until 2026-04-17 07:55:29.536788526 +0000 UTC m=+252.312754535 (durationBeforeRetry 2m2s). 
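The durationBeforeRetry values above climb 4s, 8s, 16s, 32s, 1m4s, and then hold at 2m2s, which is consistent with a per-volume retry delay that doubles after each failed MountVolume.SetUp up to a fixed ceiling. A minimal sketch of that schedule, assuming a plain doubling with a 2m2s cap as the timestamps suggest (names are illustrative, not the kubelet's actual implementation in nestedpendingoperations.go):

package main

import (
	"fmt"
	"time"
)

// nextRetryDelay doubles the previous delay up to a fixed ceiling,
// mirroring the 4s, 8s, 16s, 32s, 1m4s, 2m2s, 2m2s... sequence in the log.
func nextRetryDelay(prev, ceiling time.Duration) time.Duration {
	next := prev * 2
	if next > ceiling {
		return ceiling
	}
	return next
}

func main() {
	delay := 4 * time.Second
	const ceiling = 2*time.Minute + 2*time.Second // the plateau seen above
	for i := 0; i < 7; i++ {
		fmt.Println(delay) // 4s 8s 16s 32s 1m4s 2m2s 2m2s
		delay = nextRetryDelay(delay, ceiling)
	}
}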
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs") pod "network-metrics-daemon-pqns9" (UID: "6f23d3b8-dbbd-4489-8830-f7fb50e6d226") : secret "metrics-daemon-secret" not found Apr 17 07:53:53.661465 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:53.661425 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" podUID="3724dff9-b116-4340-9f6c-3565e9aa82fd" Apr 17 07:53:53.685626 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:53.685581 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-mlkb9" podUID="cb2327ec-f42e-4e5d-b122-579bbba751a8" Apr 17 07:53:53.691719 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:53.691696 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-t7j9w" podUID="dffe644a-60cc-49b3-96e0-1da3bf018246" Apr 17 07:53:53.775917 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:53.775877 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-pqns9" podUID="6f23d3b8-dbbd-4489-8830-f7fb50e6d226" Apr 17 07:53:54.241415 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:53:54.241382 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:53:54.241581 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:53:54.241382 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:53:54.241581 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:53:54.241402 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mlkb9" Apr 17 07:53:58.670156 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:53:58.670118 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") pod \"image-registry-757f7bcd7c-6bhnz\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:53:58.670185 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:53:58.670236 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:58.670273 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:58.670296 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-757f7bcd7c-6bhnz: secret "image-registry-tls" not found Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:58.670316 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:58.670359 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls podName:3724dff9-b116-4340-9f6c-3565e9aa82fd nodeName:}" failed. No retries permitted until 2026-04-17 07:56:00.670343324 +0000 UTC m=+283.446309330 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls") pod "image-registry-757f7bcd7c-6bhnz" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd") : secret "image-registry-tls" not found Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:58.670385 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert podName:dffe644a-60cc-49b3-96e0-1da3bf018246 nodeName:}" failed. No retries permitted until 2026-04-17 07:56:00.670372547 +0000 UTC m=+283.446338549 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert") pod "ingress-canary-t7j9w" (UID: "dffe644a-60cc-49b3-96e0-1da3bf018246") : secret "canary-serving-cert" not found Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:58.670316 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 07:53:58.670545 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:53:58.670418 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls podName:cb2327ec-f42e-4e5d-b122-579bbba751a8 nodeName:}" failed. No retries permitted until 2026-04-17 07:56:00.670409625 +0000 UTC m=+283.446375627 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls") pod "dns-default-mlkb9" (UID: "cb2327ec-f42e-4e5d-b122-579bbba751a8") : secret "dns-default-metrics-tls" not found Apr 17 07:54:00.941926 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:00.941866 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" podUID="e5decccf-5983-4694-ad30-679303b4d922" containerName="acm-agent" probeResult="failure" output="Get \"http://10.132.0.9:8000/readyz\": dial tcp 10.132.0.9:8000: connect: connection refused" Apr 17 07:54:00.945202 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:00.945175 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" podUID="555a944e-6ca6-491d-b2ef-b0257f2549a3" containerName="addon-agent" probeResult="failure" output="Get \"http://10.132.0.6:8000/healthz\": dial tcp 10.132.0.6:8000: connect: connection refused" Apr 17 07:54:00.954226 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:00.954197 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" podUID="e5decccf-5983-4694-ad30-679303b4d922" containerName="acm-agent" probeResult="failure" output="Get \"http://10.132.0.9:8000/healthz\": dial tcp 10.132.0.9:8000: connect: connection refused" Apr 17 07:54:01.257659 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:01.257613 2569 generic.go:358] "Generic (PLEG): container finished" podID="e5decccf-5983-4694-ad30-679303b4d922" containerID="2c66af2dc9b6488eabe2a123fc39fbc409f7d2f2d6854163b19d9272e6045b28" exitCode=1 Apr 17 07:54:01.257833 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:01.257685 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" event={"ID":"e5decccf-5983-4694-ad30-679303b4d922","Type":"ContainerDied","Data":"2c66af2dc9b6488eabe2a123fc39fbc409f7d2f2d6854163b19d9272e6045b28"} Apr 17 07:54:01.258046 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:01.258028 2569 scope.go:117] "RemoveContainer" containerID="2c66af2dc9b6488eabe2a123fc39fbc409f7d2f2d6854163b19d9272e6045b28" Apr 17 07:54:01.258919 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:01.258899 2569 generic.go:358] "Generic (PLEG): container finished" podID="555a944e-6ca6-491d-b2ef-b0257f2549a3" containerID="dc637c4ed377e084112bb9fa31996a66e8f8bb50caa4c0e05486a9cd3d7f86da" exitCode=255 Apr 17 07:54:01.258988 ip-10-0-132-178 kubenswrapper[2569]: I0417 
07:54:01.258943 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" event={"ID":"555a944e-6ca6-491d-b2ef-b0257f2549a3","Type":"ContainerDied","Data":"dc637c4ed377e084112bb9fa31996a66e8f8bb50caa4c0e05486a9cd3d7f86da"} Apr 17 07:54:01.259261 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:01.259244 2569 scope.go:117] "RemoveContainer" containerID="dc637c4ed377e084112bb9fa31996a66e8f8bb50caa4c0e05486a9cd3d7f86da" Apr 17 07:54:02.262892 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:02.262856 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" event={"ID":"e5decccf-5983-4694-ad30-679303b4d922","Type":"ContainerStarted","Data":"a007d387ac7fee362f45e2ea054089d9357405ca3f7f0539a3be34dd4e9773f3"} Apr 17 07:54:02.263329 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:02.263190 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:54:02.264198 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:02.264178 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-576854cd45-gmvnf" Apr 17 07:54:02.264565 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:02.264547 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d9464ccdc-xswwv" event={"ID":"555a944e-6ca6-491d-b2ef-b0257f2549a3","Type":"ContainerStarted","Data":"986344bbe2f7b2580bedd5ca79f758a12fa5a861bccb112755dafb6f603aad7e"} Apr 17 07:54:06.749815 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:06.749722 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:54:07.640537 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:07.640509 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-xg76x_9719cdb1-840b-4a4d-8e68-be9ea50fc183/dns-node-resolver/0.log" Apr 17 07:54:08.245248 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:08.245223 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-jmbz2_5681141e-f336-418e-bb90-9a38ef69d0fc/node-ca/0.log" Apr 17 07:54:30.874832 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:30.874798 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-pf4zb"] Apr 17 07:54:30.877940 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:30.877922 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:30.880179 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:30.880158 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 17 07:54:30.881684 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:30.881670 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-m2z7h\"" Apr 17 07:54:30.882227 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:30.882211 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 17 07:54:30.882284 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:30.882211 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 17 07:54:30.882699 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:30.882681 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 17 07:54:30.891033 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:30.891007 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-pf4zb"] Apr 17 07:54:31.023962 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.023919 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/74c1df39-93b8-4a20-b9cd-ec32f8438732-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.024141 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.024025 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/74c1df39-93b8-4a20-b9cd-ec32f8438732-data-volume\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.024141 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.024073 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jmm\" (UniqueName: \"kubernetes.io/projected/74c1df39-93b8-4a20-b9cd-ec32f8438732-kube-api-access-l8jmm\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.024141 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.024106 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/74c1df39-93b8-4a20-b9cd-ec32f8438732-crio-socket\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.024141 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.024122 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/74c1df39-93b8-4a20-b9cd-ec32f8438732-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-pf4zb\" (UID: 
\"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.125337 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.125262 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/74c1df39-93b8-4a20-b9cd-ec32f8438732-data-volume\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.125337 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.125318 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l8jmm\" (UniqueName: \"kubernetes.io/projected/74c1df39-93b8-4a20-b9cd-ec32f8438732-kube-api-access-l8jmm\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.125490 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.125351 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/74c1df39-93b8-4a20-b9cd-ec32f8438732-crio-socket\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.125490 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.125367 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/74c1df39-93b8-4a20-b9cd-ec32f8438732-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.125490 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.125387 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/74c1df39-93b8-4a20-b9cd-ec32f8438732-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.125490 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.125468 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/74c1df39-93b8-4a20-b9cd-ec32f8438732-crio-socket\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.125694 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.125607 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/74c1df39-93b8-4a20-b9cd-ec32f8438732-data-volume\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.125966 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.125947 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/74c1df39-93b8-4a20-b9cd-ec32f8438732-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.127781 ip-10-0-132-178 
kubenswrapper[2569]: I0417 07:54:31.127766 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/74c1df39-93b8-4a20-b9cd-ec32f8438732-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.142854 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.142822 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jmm\" (UniqueName: \"kubernetes.io/projected/74c1df39-93b8-4a20-b9cd-ec32f8438732-kube-api-access-l8jmm\") pod \"insights-runtime-extractor-pf4zb\" (UID: \"74c1df39-93b8-4a20-b9cd-ec32f8438732\") " pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.186802 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.186771 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-pf4zb" Apr 17 07:54:31.302191 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.302153 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-pf4zb"] Apr 17 07:54:31.305944 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:54:31.305913 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74c1df39_93b8_4a20_b9cd_ec32f8438732.slice/crio-92e9e07ca1252d311ddf43f38d1e7d777347c3734c6ea5b42b03a72d74c4bf52 WatchSource:0}: Error finding container 92e9e07ca1252d311ddf43f38d1e7d777347c3734c6ea5b42b03a72d74c4bf52: Status 404 returned error can't find the container with id 92e9e07ca1252d311ddf43f38d1e7d777347c3734c6ea5b42b03a72d74c4bf52 Apr 17 07:54:31.333432 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:31.333395 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-pf4zb" event={"ID":"74c1df39-93b8-4a20-b9cd-ec32f8438732","Type":"ContainerStarted","Data":"92e9e07ca1252d311ddf43f38d1e7d777347c3734c6ea5b42b03a72d74c4bf52"} Apr 17 07:54:32.336823 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:32.336788 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-pf4zb" event={"ID":"74c1df39-93b8-4a20-b9cd-ec32f8438732","Type":"ContainerStarted","Data":"6675e4eb98e772646dee084393625acbb52f7e396be0bed2910df6182a88f507"} Apr 17 07:54:32.336823 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:32.336825 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-pf4zb" event={"ID":"74c1df39-93b8-4a20-b9cd-ec32f8438732","Type":"ContainerStarted","Data":"7888384a9823b1a27f9b51c043dcc882839299d3a1c98959f0ccfdffacda429f"} Apr 17 07:54:34.348066 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:34.348031 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-pf4zb" event={"ID":"74c1df39-93b8-4a20-b9cd-ec32f8438732","Type":"ContainerStarted","Data":"75933c3e8e571b44962fefb4af375d4060443c58b8b9c413b44309693b2626ed"} Apr 17 07:54:34.366041 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:34.365993 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-pf4zb" podStartSLOduration=2.451546355 podStartE2EDuration="4.365980586s" podCreationTimestamp="2026-04-17 07:54:30 +0000 UTC" 
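For insights-runtime-extractor-pf4zb the volume flow above runs to completion: reconciler_common first records VerifyControllerAttachedVolume for each desired volume, then MountVolume started, then operation_generator reports MountVolume.SetUp succeeded, and only then does the sandbox come up. The shape of that loop, reduced to a hedged sketch (the real reconciler lives in the kubelet's volume manager; the types and names here are illustrative only):

package main

import "fmt"

// volume state as this sketch tracks it.
type volume struct {
	name    string
	mounted bool
}

// reconcile walks the desired volumes and mounts whatever is not yet in
// the actual state, roughly the desired-vs-actual comparison implied by
// the reconciler_common.go / operation_generator.go entries above.
func reconcile(desired []*volume, mount func(*volume) error) {
	for _, v := range desired {
		if v.mounted {
			continue // already in the actual state of the world
		}
		fmt.Printf("MountVolume started for volume %q\n", v.name)
		if err := mount(v); err != nil {
			fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n", v.name, err)
			continue // retried on a later pass, with backoff
		}
		v.mounted = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	desired := []*volume{{name: "data-volume"}, {name: "insights-runtime-extractor-tls"}}
	reconcile(desired, func(*volume) error { return nil })
}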
firstStartedPulling="2026-04-17 07:54:31.372382198 +0000 UTC m=+194.148348199" lastFinishedPulling="2026-04-17 07:54:33.28681642 +0000 UTC m=+196.062782430" observedRunningTime="2026-04-17 07:54:34.364902914 +0000 UTC m=+197.140868938" watchObservedRunningTime="2026-04-17 07:54:34.365980586 +0000 UTC m=+197.141946609" Apr 17 07:54:41.944560 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.944515 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-5bldv"] Apr 17 07:54:41.947677 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.947638 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:41.950046 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.950014 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 17 07:54:41.950046 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.950036 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 17 07:54:41.950226 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.950168 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 17 07:54:41.950368 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.950348 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 17 07:54:41.951222 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.951203 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 17 07:54:41.951922 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.951871 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-68tcw\"" Apr 17 07:54:41.952327 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:41.952005 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 17 07:54:42.010408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010372 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.010408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010408 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-accelerators-collector-config\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.010697 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010430 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-metrics-client-ca\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") 
" pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.010697 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010491 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-tls\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.010697 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010534 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-sys\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.010697 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010553 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-wtmp\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.010697 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010623 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-textfile\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.010915 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010720 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-root\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.010915 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.010748 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsttn\" (UniqueName: \"kubernetes.io/projected/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-kube-api-access-tsttn\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.111908 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.111869 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.111908 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.111910 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-accelerators-collector-config\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.111928 
2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-metrics-client-ca\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.111955 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-tls\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.111987 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-sys\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112010 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-wtmp\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112063 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-textfile\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112108 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-root\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112131 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tsttn\" (UniqueName: \"kubernetes.io/projected/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-kube-api-access-tsttn\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112165 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112134 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-sys\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112556 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112203 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-wtmp\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112556 ip-10-0-132-178 
kubenswrapper[2569]: I0417 07:54:42.112257 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-root\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112556 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112506 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-textfile\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112720 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112583 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-metrics-client-ca\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.112720 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.112601 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-accelerators-collector-config\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.114875 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.114846 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-tls\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.114982 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.114881 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.124551 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.124524 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsttn\" (UniqueName: \"kubernetes.io/projected/0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0-kube-api-access-tsttn\") pod \"node-exporter-5bldv\" (UID: \"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0\") " pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.260664 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.260542 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-5bldv" Apr 17 07:54:42.270313 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:54:42.270287 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0075d905_cd03_4cbe_ab5a_c8df3eb1fbe0.slice/crio-af494cb19ee8d3b6f371cb6342389e8843ee52ef7c1231745258484560799991 WatchSource:0}: Error finding container af494cb19ee8d3b6f371cb6342389e8843ee52ef7c1231745258484560799991: Status 404 returned error can't find the container with id af494cb19ee8d3b6f371cb6342389e8843ee52ef7c1231745258484560799991 Apr 17 07:54:42.367794 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:42.367756 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5bldv" event={"ID":"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0","Type":"ContainerStarted","Data":"af494cb19ee8d3b6f371cb6342389e8843ee52ef7c1231745258484560799991"} Apr 17 07:54:43.371577 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:43.371495 2569 generic.go:358] "Generic (PLEG): container finished" podID="0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0" containerID="836df9eeef88991d6950e55118b5e9854ba23a82b83cc95b9099c35bc1f3ed57" exitCode=0 Apr 17 07:54:43.371577 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:43.371557 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5bldv" event={"ID":"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0","Type":"ContainerDied","Data":"836df9eeef88991d6950e55118b5e9854ba23a82b83cc95b9099c35bc1f3ed57"} Apr 17 07:54:44.375251 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:44.375218 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5bldv" event={"ID":"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0","Type":"ContainerStarted","Data":"9efa51b3df9afb645a9ef5f4a940247b31af156d18c1dbc1342123b2f664c504"} Apr 17 07:54:44.375251 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:44.375254 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5bldv" event={"ID":"0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0","Type":"ContainerStarted","Data":"2442b6f34c31404bfd35d3ba6f986011bc5b05371698e931fb7969f079386090"} Apr 17 07:54:44.417680 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:44.417616 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-5bldv" podStartSLOduration=2.637073483 podStartE2EDuration="3.417600132s" podCreationTimestamp="2026-04-17 07:54:41 +0000 UTC" firstStartedPulling="2026-04-17 07:54:42.272077986 +0000 UTC m=+205.048043987" lastFinishedPulling="2026-04-17 07:54:43.052604621 +0000 UTC m=+205.828570636" observedRunningTime="2026-04-17 07:54:44.416144096 +0000 UTC m=+207.192110119" watchObservedRunningTime="2026-04-17 07:54:44.417600132 +0000 UTC m=+207.193566158" Apr 17 07:54:52.610088 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:52.607024 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-757f7bcd7c-6bhnz"] Apr 17 07:54:52.610088 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:54:52.607554 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" podUID="3724dff9-b116-4340-9f6c-3565e9aa82fd" Apr 17 07:54:53.398057 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.398028 2569 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:54:53.402087 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.402069 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:54:53.497598 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.497565 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7bp8\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-kube-api-access-f7bp8\") pod \"3724dff9-b116-4340-9f6c-3565e9aa82fd\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " Apr 17 07:54:53.497797 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.497617 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-trusted-ca\") pod \"3724dff9-b116-4340-9f6c-3565e9aa82fd\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " Apr 17 07:54:53.497797 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.497640 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-certificates\") pod \"3724dff9-b116-4340-9f6c-3565e9aa82fd\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " Apr 17 07:54:53.497797 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.497687 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3724dff9-b116-4340-9f6c-3565e9aa82fd-ca-trust-extracted\") pod \"3724dff9-b116-4340-9f6c-3565e9aa82fd\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " Apr 17 07:54:53.497949 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.497819 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-installation-pull-secrets\") pod \"3724dff9-b116-4340-9f6c-3565e9aa82fd\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " Apr 17 07:54:53.497949 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.497887 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-image-registry-private-configuration\") pod \"3724dff9-b116-4340-9f6c-3565e9aa82fd\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " Apr 17 07:54:53.497949 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.497931 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-bound-sa-token\") pod \"3724dff9-b116-4340-9f6c-3565e9aa82fd\" (UID: \"3724dff9-b116-4340-9f6c-3565e9aa82fd\") " Apr 17 07:54:53.498101 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.497977 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3724dff9-b116-4340-9f6c-3565e9aa82fd-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3724dff9-b116-4340-9f6c-3565e9aa82fd" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 07:54:53.498101 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.498045 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3724dff9-b116-4340-9f6c-3565e9aa82fd" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 07:54:53.498101 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.498085 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3724dff9-b116-4340-9f6c-3565e9aa82fd" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 07:54:53.498256 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.498212 2569 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-trusted-ca\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 07:54:53.498256 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.498227 2569 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-certificates\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 07:54:53.498256 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.498238 2569 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3724dff9-b116-4340-9f6c-3565e9aa82fd-ca-trust-extracted\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 07:54:53.500158 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.500128 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-kube-api-access-f7bp8" (OuterVolumeSpecName: "kube-api-access-f7bp8") pod "3724dff9-b116-4340-9f6c-3565e9aa82fd" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd"). InnerVolumeSpecName "kube-api-access-f7bp8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 07:54:53.500251 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.500156 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3724dff9-b116-4340-9f6c-3565e9aa82fd" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 07:54:53.500317 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.500298 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3724dff9-b116-4340-9f6c-3565e9aa82fd" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 07:54:53.500403 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.500378 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "3724dff9-b116-4340-9f6c-3565e9aa82fd" (UID: "3724dff9-b116-4340-9f6c-3565e9aa82fd"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 07:54:53.599182 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.599137 2569 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-bound-sa-token\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 07:54:53.599182 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.599175 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f7bp8\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-kube-api-access-f7bp8\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 07:54:53.599182 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.599188 2569 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-installation-pull-secrets\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 07:54:53.599408 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:53.599198 2569 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/3724dff9-b116-4340-9f6c-3565e9aa82fd-image-registry-private-configuration\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 07:54:54.400558 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:54.400483 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-757f7bcd7c-6bhnz" Apr 17 07:54:54.432933 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:54.432903 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-757f7bcd7c-6bhnz"] Apr 17 07:54:54.436597 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:54.436571 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-757f7bcd7c-6bhnz"] Apr 17 07:54:54.505839 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:54.505807 2569 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3724dff9-b116-4340-9f6c-3565e9aa82fd-registry-tls\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 07:54:55.752625 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:54:55.752591 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3724dff9-b116-4340-9f6c-3565e9aa82fd" path="/var/lib/kubelet/pods/3724dff9-b116-4340-9f6c-3565e9aa82fd/volumes" Apr 17 07:55:00.970601 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:00.970558 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" podUID="7302806b-ac03-4214-9412-5ec8447ef5fe" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 17 07:55:10.970421 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:10.970379 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" podUID="7302806b-ac03-4214-9412-5ec8447ef5fe" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 17 07:55:20.969940 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:20.969898 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" podUID="7302806b-ac03-4214-9412-5ec8447ef5fe" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 17 07:55:20.970396 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:20.969983 2569 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" Apr 17 07:55:20.970572 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:20.970537 2569 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="service-proxy" containerStatusID={"Type":"cri-o","ID":"2d269fb45d17eccd0191d0eb1cdb76c2cbdaaa3b55ad74883c7cf8897e3fcbc1"} pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" containerMessage="Container service-proxy failed liveness probe, will be restarted" Apr 17 07:55:20.970622 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:20.970607 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" podUID="7302806b-ac03-4214-9412-5ec8447ef5fe" containerName="service-proxy" containerID="cri-o://2d269fb45d17eccd0191d0eb1cdb76c2cbdaaa3b55ad74883c7cf8897e3fcbc1" gracePeriod=30 Apr 17 07:55:21.469290 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:21.469259 2569 generic.go:358] "Generic (PLEG): container finished" podID="7302806b-ac03-4214-9412-5ec8447ef5fe" containerID="2d269fb45d17eccd0191d0eb1cdb76c2cbdaaa3b55ad74883c7cf8897e3fcbc1" 
exitCode=2 Apr 17 07:55:21.469466 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:21.469326 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" event={"ID":"7302806b-ac03-4214-9412-5ec8447ef5fe","Type":"ContainerDied","Data":"2d269fb45d17eccd0191d0eb1cdb76c2cbdaaa3b55ad74883c7cf8897e3fcbc1"} Apr 17 07:55:21.469466 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:21.469358 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-674846d7bd-9qx66" event={"ID":"7302806b-ac03-4214-9412-5ec8447ef5fe","Type":"ContainerStarted","Data":"98d84d4d702653b485f6ab717177632e7396b99983c77329558c60d31f174dd3"} Apr 17 07:55:27.186559 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:27.186527 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-xg76x_9719cdb1-840b-4a4d-8e68-be9ea50fc183/dns-node-resolver/0.log" Apr 17 07:55:29.582475 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:29.582418 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:55:29.584690 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:29.584668 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f23d3b8-dbbd-4489-8830-f7fb50e6d226-metrics-certs\") pod \"network-metrics-daemon-pqns9\" (UID: \"6f23d3b8-dbbd-4489-8830-f7fb50e6d226\") " pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:55:29.853047 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:29.852965 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-tqz2q\"" Apr 17 07:55:29.861020 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:29.860998 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pqns9" Apr 17 07:55:29.971521 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:29.971486 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pqns9"] Apr 17 07:55:29.975845 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:55:29.975816 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f23d3b8_dbbd_4489_8830_f7fb50e6d226.slice/crio-fa7366ddfb2dc67e63374e67910876eff7b081fd7d7df287e59cbc885af0adff WatchSource:0}: Error finding container fa7366ddfb2dc67e63374e67910876eff7b081fd7d7df287e59cbc885af0adff: Status 404 returned error can't find the container with id fa7366ddfb2dc67e63374e67910876eff7b081fd7d7df287e59cbc885af0adff Apr 17 07:55:30.491684 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:30.491631 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pqns9" event={"ID":"6f23d3b8-dbbd-4489-8830-f7fb50e6d226","Type":"ContainerStarted","Data":"fa7366ddfb2dc67e63374e67910876eff7b081fd7d7df287e59cbc885af0adff"} Apr 17 07:55:31.495841 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:31.495806 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pqns9" event={"ID":"6f23d3b8-dbbd-4489-8830-f7fb50e6d226","Type":"ContainerStarted","Data":"388f8607e38a5279e563180975ceffde5744fc29e3b89753bc7faab50f90e662"} Apr 17 07:55:31.495841 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:31.495842 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pqns9" event={"ID":"6f23d3b8-dbbd-4489-8830-f7fb50e6d226","Type":"ContainerStarted","Data":"2b0d84989a8fecaf192f3fce18f7e6cb183ed15daa43ddb13c69d31ca59cf301"} Apr 17 07:55:31.513713 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:31.513665 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pqns9" podStartSLOduration=253.549488614 podStartE2EDuration="4m14.513634875s" podCreationTimestamp="2026-04-17 07:51:17 +0000 UTC" firstStartedPulling="2026-04-17 07:55:29.977592126 +0000 UTC m=+252.753558133" lastFinishedPulling="2026-04-17 07:55:30.941738389 +0000 UTC m=+253.717704394" observedRunningTime="2026-04-17 07:55:31.512835141 +0000 UTC m=+254.288801166" watchObservedRunningTime="2026-04-17 07:55:31.513634875 +0000 UTC m=+254.289600939" Apr 17 07:55:57.242406 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:55:57.242340 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-t7j9w" podUID="dffe644a-60cc-49b3-96e0-1da3bf018246" Apr 17 07:55:57.242406 ip-10-0-132-178 kubenswrapper[2569]: E0417 07:55:57.242340 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-mlkb9" podUID="cb2327ec-f42e-4e5d-b122-579bbba751a8" Apr 17 07:55:57.564826 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:57.564798 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mlkb9" Apr 17 07:55:57.564986 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:55:57.564798 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:56:00.713091 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:00.713051 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:56:00.713091 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:00.713094 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:56:00.715487 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:00.715465 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffe644a-60cc-49b3-96e0-1da3bf018246-cert\") pod \"ingress-canary-t7j9w\" (UID: \"dffe644a-60cc-49b3-96e0-1da3bf018246\") " pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:56:00.715552 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:00.715465 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb2327ec-f42e-4e5d-b122-579bbba751a8-metrics-tls\") pod \"dns-default-mlkb9\" (UID: \"cb2327ec-f42e-4e5d-b122-579bbba751a8\") " pod="openshift-dns/dns-default-mlkb9" Apr 17 07:56:00.868013 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:00.867983 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-l9k4x\"" Apr 17 07:56:00.868992 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:00.868974 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-tgj8b\"" Apr 17 07:56:00.876112 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:00.876085 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mlkb9" Apr 17 07:56:00.876204 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:00.876098 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t7j9w" Apr 17 07:56:01.001344 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:01.001307 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t7j9w"] Apr 17 07:56:01.005141 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:56:01.005111 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddffe644a_60cc_49b3_96e0_1da3bf018246.slice/crio-d5469e4b83d49be10ac070847f15e2624891dc2c8620a2d30c06f16266acd0fd WatchSource:0}: Error finding container d5469e4b83d49be10ac070847f15e2624891dc2c8620a2d30c06f16266acd0fd: Status 404 returned error can't find the container with id d5469e4b83d49be10ac070847f15e2624891dc2c8620a2d30c06f16266acd0fd Apr 17 07:56:01.015505 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:01.015482 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mlkb9"] Apr 17 07:56:01.017808 ip-10-0-132-178 kubenswrapper[2569]: W0417 07:56:01.017787 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2327ec_f42e_4e5d_b122_579bbba751a8.slice/crio-a6a2e760d607538bb04aaac61eca1a918a280967ee23eebe64c67cfc4d9a71fc WatchSource:0}: Error finding container a6a2e760d607538bb04aaac61eca1a918a280967ee23eebe64c67cfc4d9a71fc: Status 404 returned error can't find the container with id a6a2e760d607538bb04aaac61eca1a918a280967ee23eebe64c67cfc4d9a71fc Apr 17 07:56:01.575848 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:01.575815 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t7j9w" event={"ID":"dffe644a-60cc-49b3-96e0-1da3bf018246","Type":"ContainerStarted","Data":"d5469e4b83d49be10ac070847f15e2624891dc2c8620a2d30c06f16266acd0fd"} Apr 17 07:56:01.577063 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:01.577031 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mlkb9" event={"ID":"cb2327ec-f42e-4e5d-b122-579bbba751a8","Type":"ContainerStarted","Data":"a6a2e760d607538bb04aaac61eca1a918a280967ee23eebe64c67cfc4d9a71fc"} Apr 17 07:56:03.583664 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:03.583606 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t7j9w" event={"ID":"dffe644a-60cc-49b3-96e0-1da3bf018246","Type":"ContainerStarted","Data":"f15863a6645314c0ef8f0ca8857d0c42009cc71aa74f87cf703dcd7701f6a429"} Apr 17 07:56:03.585094 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:03.585071 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mlkb9" event={"ID":"cb2327ec-f42e-4e5d-b122-579bbba751a8","Type":"ContainerStarted","Data":"d5387fcbec869e513bd83629f63c2f13516f7442782d899632b9f89f70119bb5"} Apr 17 07:56:03.585094 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:03.585096 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mlkb9" event={"ID":"cb2327ec-f42e-4e5d-b122-579bbba751a8","Type":"ContainerStarted","Data":"ea37146eb3b28d069637750acf8f81216d4107d34702df37a3f4892cf28b8937"} Apr 17 07:56:03.585275 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:03.585202 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-mlkb9" Apr 17 07:56:03.600282 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:03.600239 2569 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-ingress-canary/ingress-canary-t7j9w" podStartSLOduration=251.656726642 podStartE2EDuration="4m13.600225252s" podCreationTimestamp="2026-04-17 07:51:50 +0000 UTC" firstStartedPulling="2026-04-17 07:56:01.006854827 +0000 UTC m=+283.782820828" lastFinishedPulling="2026-04-17 07:56:02.950353424 +0000 UTC m=+285.726319438" observedRunningTime="2026-04-17 07:56:03.598558345 +0000 UTC m=+286.374524368" watchObservedRunningTime="2026-04-17 07:56:03.600225252 +0000 UTC m=+286.376191274" Apr 17 07:56:03.614008 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:03.613958 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mlkb9" podStartSLOduration=251.687010319 podStartE2EDuration="4m13.613942322s" podCreationTimestamp="2026-04-17 07:51:50 +0000 UTC" firstStartedPulling="2026-04-17 07:56:01.019518803 +0000 UTC m=+283.795484805" lastFinishedPulling="2026-04-17 07:56:02.946450806 +0000 UTC m=+285.722416808" observedRunningTime="2026-04-17 07:56:03.612995425 +0000 UTC m=+286.388961451" watchObservedRunningTime="2026-04-17 07:56:03.613942322 +0000 UTC m=+286.389908346" Apr 17 07:56:13.589558 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:13.589522 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mlkb9" Apr 17 07:56:17.655491 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:17.655464 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 07:56:17.655491 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:17.655480 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 07:56:17.662208 ip-10-0-132-178 kubenswrapper[2569]: I0417 07:56:17.662188 2569 kubelet.go:1628] "Image garbage collection succeeded" Apr 17 08:01:17.672078 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:17.672047 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 08:01:17.672959 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:17.672939 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 08:01:29.872144 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.872109 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx"] Apr 17 08:01:29.875091 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.875072 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:29.877547 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.877516 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 17 08:01:29.877674 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.877574 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 17 08:01:29.877674 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.877628 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 17 08:01:29.877674 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.877664 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 17 08:01:29.878686 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.878669 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 17 08:01:29.878768 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.878754 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-p4tfj\"" Apr 17 08:01:29.881786 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.881761 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx"] Apr 17 08:01:29.946939 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.946899 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xlx\" (UniqueName: \"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-kube-api-access-j8xlx\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:29.946939 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.946939 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:29.947145 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:29.946993 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/51788a71-8fa4-4fb2-8d60-8284b9112c3c-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:30.047830 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.047799 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xlx\" (UniqueName: \"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-kube-api-access-j8xlx\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:30.047998 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.047845 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: 
\"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:30.047998 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.047890 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/51788a71-8fa4-4fb2-8d60-8284b9112c3c-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:30.048103 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.048021 2569 secret.go:281] references non-existent secret key: tls.crt Apr 17 08:01:30.048103 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.048046 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 17 08:01:30.048103 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.048066 2569 projected.go:264] Couldn't get secret openshift-keda/keda-metrics-apiserver-certs: secret "keda-metrics-apiserver-certs" not found Apr 17 08:01:30.048103 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.048088 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx: [references non-existent secret key: tls.crt, secret "keda-metrics-apiserver-certs" not found] Apr 17 08:01:30.048266 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.048164 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates podName:51788a71-8fa4-4fb2-8d60-8284b9112c3c nodeName:}" failed. No retries permitted until 2026-04-17 08:01:30.548145655 +0000 UTC m=+613.324111663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates") pod "keda-metrics-apiserver-7c9f485588-d4cvx" (UID: "51788a71-8fa4-4fb2-8d60-8284b9112c3c") : [references non-existent secret key: tls.crt, secret "keda-metrics-apiserver-certs" not found] Apr 17 08:01:30.048316 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.048271 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/51788a71-8fa4-4fb2-8d60-8284b9112c3c-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:30.056835 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.056808 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xlx\" (UniqueName: \"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-kube-api-access-j8xlx\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:30.169301 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.169224 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-6vfjg"] Apr 17 08:01:30.172251 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.172231 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:30.174662 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.174626 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 17 08:01:30.182510 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.182488 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-6vfjg"] Apr 17 08:01:30.250298 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.250260 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlmj9\" (UniqueName: \"kubernetes.io/projected/2836f8b3-9f77-4b39-b01b-4e785d932343-kube-api-access-rlmj9\") pod \"keda-admission-cf49989db-6vfjg\" (UID: \"2836f8b3-9f77-4b39-b01b-4e785d932343\") " pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:30.250298 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.250298 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2836f8b3-9f77-4b39-b01b-4e785d932343-certificates\") pod \"keda-admission-cf49989db-6vfjg\" (UID: \"2836f8b3-9f77-4b39-b01b-4e785d932343\") " pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:30.351434 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.351399 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rlmj9\" (UniqueName: \"kubernetes.io/projected/2836f8b3-9f77-4b39-b01b-4e785d932343-kube-api-access-rlmj9\") pod \"keda-admission-cf49989db-6vfjg\" (UID: \"2836f8b3-9f77-4b39-b01b-4e785d932343\") " pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:30.351434 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.351441 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2836f8b3-9f77-4b39-b01b-4e785d932343-certificates\") pod \"keda-admission-cf49989db-6vfjg\" (UID: \"2836f8b3-9f77-4b39-b01b-4e785d932343\") " pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:30.353870 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.353849 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2836f8b3-9f77-4b39-b01b-4e785d932343-certificates\") pod \"keda-admission-cf49989db-6vfjg\" (UID: \"2836f8b3-9f77-4b39-b01b-4e785d932343\") " pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:30.359287 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.359265 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlmj9\" (UniqueName: \"kubernetes.io/projected/2836f8b3-9f77-4b39-b01b-4e785d932343-kube-api-access-rlmj9\") pod \"keda-admission-cf49989db-6vfjg\" (UID: \"2836f8b3-9f77-4b39-b01b-4e785d932343\") " pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:30.483068 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.482978 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:30.553060 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.552983 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:30.553196 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.553141 2569 secret.go:281] references non-existent secret key: tls.crt Apr 17 08:01:30.553196 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.553162 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 17 08:01:30.553196 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.553184 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx: references non-existent secret key: tls.crt Apr 17 08:01:30.553296 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:30.553249 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates podName:51788a71-8fa4-4fb2-8d60-8284b9112c3c nodeName:}" failed. No retries permitted until 2026-04-17 08:01:31.553231233 +0000 UTC m=+614.329197237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates") pod "keda-metrics-apiserver-7c9f485588-d4cvx" (UID: "51788a71-8fa4-4fb2-8d60-8284b9112c3c") : references non-existent secret key: tls.crt Apr 17 08:01:30.597610 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.597580 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-6vfjg"] Apr 17 08:01:30.600456 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:01:30.600412 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2836f8b3_9f77_4b39_b01b_4e785d932343.slice/crio-61cbeed39589286cf6d833f3ad36cacb747c68c1cee41371a20747a2668d5412 WatchSource:0}: Error finding container 61cbeed39589286cf6d833f3ad36cacb747c68c1cee41371a20747a2668d5412: Status 404 returned error can't find the container with id 61cbeed39589286cf6d833f3ad36cacb747c68c1cee41371a20747a2668d5412 Apr 17 08:01:30.601679 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:30.601662 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 08:01:31.425787 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:31.425745 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-6vfjg" event={"ID":"2836f8b3-9f77-4b39-b01b-4e785d932343","Type":"ContainerStarted","Data":"61cbeed39589286cf6d833f3ad36cacb747c68c1cee41371a20747a2668d5412"} Apr 17 08:01:31.560405 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:31.560371 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:31.560577 ip-10-0-132-178 
kubenswrapper[2569]: E0417 08:01:31.560544 2569 secret.go:281] references non-existent secret key: tls.crt Apr 17 08:01:31.560577 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:31.560568 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 17 08:01:31.560708 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:31.560591 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx: references non-existent secret key: tls.crt Apr 17 08:01:31.560708 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:01:31.560679 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates podName:51788a71-8fa4-4fb2-8d60-8284b9112c3c nodeName:}" failed. No retries permitted until 2026-04-17 08:01:33.560641595 +0000 UTC m=+616.336607610 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates") pod "keda-metrics-apiserver-7c9f485588-d4cvx" (UID: "51788a71-8fa4-4fb2-8d60-8284b9112c3c") : references non-existent secret key: tls.crt Apr 17 08:01:33.432760 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:33.432664 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-6vfjg" event={"ID":"2836f8b3-9f77-4b39-b01b-4e785d932343","Type":"ContainerStarted","Data":"35884206f243426976fa055e689535218b6a263a1ce54c53b67d666932e904f5"} Apr 17 08:01:33.433113 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:33.432874 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:01:33.448301 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:33.448254 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-6vfjg" podStartSLOduration=0.961284836 podStartE2EDuration="3.44824059s" podCreationTimestamp="2026-04-17 08:01:30 +0000 UTC" firstStartedPulling="2026-04-17 08:01:30.601790435 +0000 UTC m=+613.377756436" lastFinishedPulling="2026-04-17 08:01:33.088746189 +0000 UTC m=+615.864712190" observedRunningTime="2026-04-17 08:01:33.447523468 +0000 UTC m=+616.223489503" watchObservedRunningTime="2026-04-17 08:01:33.44824059 +0000 UTC m=+616.224206612" Apr 17 08:01:33.578003 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:33.577964 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:33.580526 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:33.580496 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/51788a71-8fa4-4fb2-8d60-8284b9112c3c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-d4cvx\" (UID: \"51788a71-8fa4-4fb2-8d60-8284b9112c3c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:33.785808 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:33.785766 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:33.902026 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:33.901995 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx"] Apr 17 08:01:33.904750 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:01:33.904718 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51788a71_8fa4_4fb2_8d60_8284b9112c3c.slice/crio-6c113c53702bfcc0c72d389b8811e353910ce2f1985edd1bbafcd151b6fa7b90 WatchSource:0}: Error finding container 6c113c53702bfcc0c72d389b8811e353910ce2f1985edd1bbafcd151b6fa7b90: Status 404 returned error can't find the container with id 6c113c53702bfcc0c72d389b8811e353910ce2f1985edd1bbafcd151b6fa7b90 Apr 17 08:01:34.436986 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:34.436950 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" event={"ID":"51788a71-8fa4-4fb2-8d60-8284b9112c3c","Type":"ContainerStarted","Data":"6c113c53702bfcc0c72d389b8811e353910ce2f1985edd1bbafcd151b6fa7b90"} Apr 17 08:01:36.442788 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:36.442695 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" event={"ID":"51788a71-8fa4-4fb2-8d60-8284b9112c3c","Type":"ContainerStarted","Data":"57fa487a032c39abd892242db7733739dd561b5ffc9cd3fb847776ca1ad3fd83"} Apr 17 08:01:36.442788 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:36.442765 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:36.458439 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:36.458389 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" podStartSLOduration=5.094125829 podStartE2EDuration="7.458372159s" podCreationTimestamp="2026-04-17 08:01:29 +0000 UTC" firstStartedPulling="2026-04-17 08:01:33.906104112 +0000 UTC m=+616.682070113" lastFinishedPulling="2026-04-17 08:01:36.270350442 +0000 UTC m=+619.046316443" observedRunningTime="2026-04-17 08:01:36.457662108 +0000 UTC m=+619.233628124" watchObservedRunningTime="2026-04-17 08:01:36.458372159 +0000 UTC m=+619.234338182" Apr 17 08:01:47.450383 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:47.450354 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-d4cvx" Apr 17 08:01:54.439789 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:01:54.439758 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-6vfjg" Apr 17 08:02:36.300763 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.300732 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-hcqvl"] Apr 17 08:02:36.303752 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.303734 2569 util.go:30] "No sandbox for pod can be found. 
Apr 17 08:02:36.306252 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.306216 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 17 08:02:36.306252 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.306234 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\""
Apr 17 08:02:36.306409 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.306260 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 17 08:02:36.307237 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.307223 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-brlbt\""
Apr 17 08:02:36.310867 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.310843 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-hcqvl"]
Apr 17 08:02:36.315388 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.315368 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/88201a3e-a275-4930-b587-58251c2a1bbd-data\") pod \"seaweedfs-86cc847c5c-hcqvl\" (UID: \"88201a3e-a275-4930-b587-58251c2a1bbd\") " pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:02:36.315493 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.315395 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xkp\" (UniqueName: \"kubernetes.io/projected/88201a3e-a275-4930-b587-58251c2a1bbd-kube-api-access-j5xkp\") pod \"seaweedfs-86cc847c5c-hcqvl\" (UID: \"88201a3e-a275-4930-b587-58251c2a1bbd\") " pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:02:36.416053 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.416013 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/88201a3e-a275-4930-b587-58251c2a1bbd-data\") pod \"seaweedfs-86cc847c5c-hcqvl\" (UID: \"88201a3e-a275-4930-b587-58251c2a1bbd\") " pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:02:36.416053 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.416056 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5xkp\" (UniqueName: \"kubernetes.io/projected/88201a3e-a275-4930-b587-58251c2a1bbd-kube-api-access-j5xkp\") pod \"seaweedfs-86cc847c5c-hcqvl\" (UID: \"88201a3e-a275-4930-b587-58251c2a1bbd\") " pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:02:36.416478 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.416455 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/88201a3e-a275-4930-b587-58251c2a1bbd-data\") pod \"seaweedfs-86cc847c5c-hcqvl\" (UID: \"88201a3e-a275-4930-b587-58251c2a1bbd\") " pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:02:36.425450 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.425428 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5xkp\" (UniqueName: \"kubernetes.io/projected/88201a3e-a275-4930-b587-58251c2a1bbd-kube-api-access-j5xkp\") pod \"seaweedfs-86cc847c5c-hcqvl\" (UID: \"88201a3e-a275-4930-b587-58251c2a1bbd\") " pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:02:36.614169 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.614087 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:02:36.731104 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:36.731045 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-hcqvl"]
Apr 17 08:02:36.733142 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:02:36.733110 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88201a3e_a275_4930_b587_58251c2a1bbd.slice/crio-fe5be398931b054056fe4a419527699ed2664103fb9c2c2af471a2cc46ae054b WatchSource:0}: Error finding container fe5be398931b054056fe4a419527699ed2664103fb9c2c2af471a2cc46ae054b: Status 404 returned error can't find the container with id fe5be398931b054056fe4a419527699ed2664103fb9c2c2af471a2cc46ae054b
Apr 17 08:02:37.614761 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:37.614730 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-hcqvl" event={"ID":"88201a3e-a275-4930-b587-58251c2a1bbd","Type":"ContainerStarted","Data":"fe5be398931b054056fe4a419527699ed2664103fb9c2c2af471a2cc46ae054b"}
Apr 17 08:02:39.621934 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:39.621898 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-hcqvl" event={"ID":"88201a3e-a275-4930-b587-58251c2a1bbd","Type":"ContainerStarted","Data":"87093b49709ce5b69baf23a509f4e969051bd7e121238307023d25ab4a236aac"}
Apr 17 08:02:39.622312 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:39.622034 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:02:39.636729 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:39.636671 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-hcqvl" podStartSLOduration=1.143815025 podStartE2EDuration="3.6366262s" podCreationTimestamp="2026-04-17 08:02:36 +0000 UTC" firstStartedPulling="2026-04-17 08:02:36.734317408 +0000 UTC m=+679.510283413" lastFinishedPulling="2026-04-17 08:02:39.227128586 +0000 UTC m=+682.003094588" observedRunningTime="2026-04-17 08:02:39.635529296 +0000 UTC m=+682.411495316" watchObservedRunningTime="2026-04-17 08:02:39.6366262 +0000 UTC m=+682.412592225"
Apr 17 08:02:45.627001 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:02:45.626968 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-hcqvl"
Apr 17 08:03:47.144229 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.144188 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-99q7m"]
Apr 17 08:03:47.146444 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.146422 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-99q7m"
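[annotation] The manager.go:1169 "Failed to process watch event ... 404" warnings recur for nearly every pod started on this node (6c113c53... above, fe5be398... here, and more below). The likely reading, hedged: cAdvisor sees the new crio-* cgroup before CRI-O has finished registering the container, so the lookup races and returns NotFound; each warning here is followed within about a second by a ContainerStarted event for the same ID, so nothing is actually wrong. A sketch that verifies that pairing across a dump (stdlib only; assumes one record per line):

    import re
    import sys

    watch_404 = re.compile(r"Error finding container ([0-9a-f]{64})")
    started = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

    pending, resolved = set(), 0
    for line in sys.stdin:
        m = watch_404.search(line)
        if m:
            pending.add(m.group(1))
        m = started.search(line)
        if m and m.group(1) in pending:
            pending.discard(m.group(1))
            resolved += 1
    print(f"{resolved} watch-event warnings later resolved by ContainerStarted; "
          f"{len(pending)} never resolved")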
Apr 17 08:03:47.148900 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.148877 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\""
Apr 17 08:03:47.149674 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.149657 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-5xjjm\""
Apr 17 08:03:47.159402 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.159381 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-99q7m"]
Apr 17 08:03:47.315919 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.315883 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/70f8d1df-cd0b-480b-b718-bae60c29718f-tls-certs\") pod \"model-serving-api-86f7b4b499-99q7m\" (UID: \"70f8d1df-cd0b-480b-b718-bae60c29718f\") " pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:03:47.315919 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.315922 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bxr\" (UniqueName: \"kubernetes.io/projected/70f8d1df-cd0b-480b-b718-bae60c29718f-kube-api-access-v4bxr\") pod \"model-serving-api-86f7b4b499-99q7m\" (UID: \"70f8d1df-cd0b-480b-b718-bae60c29718f\") " pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:03:47.417356 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.417264 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/70f8d1df-cd0b-480b-b718-bae60c29718f-tls-certs\") pod \"model-serving-api-86f7b4b499-99q7m\" (UID: \"70f8d1df-cd0b-480b-b718-bae60c29718f\") " pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:03:47.417356 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.417316 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bxr\" (UniqueName: \"kubernetes.io/projected/70f8d1df-cd0b-480b-b718-bae60c29718f-kube-api-access-v4bxr\") pod \"model-serving-api-86f7b4b499-99q7m\" (UID: \"70f8d1df-cd0b-480b-b718-bae60c29718f\") " pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:03:47.419680 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.419632 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/70f8d1df-cd0b-480b-b718-bae60c29718f-tls-certs\") pod \"model-serving-api-86f7b4b499-99q7m\" (UID: \"70f8d1df-cd0b-480b-b718-bae60c29718f\") " pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:03:47.425610 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.425584 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bxr\" (UniqueName: \"kubernetes.io/projected/70f8d1df-cd0b-480b-b718-bae60c29718f-kube-api-access-v4bxr\") pod \"model-serving-api-86f7b4b499-99q7m\" (UID: \"70f8d1df-cd0b-480b-b718-bae60c29718f\") " pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:03:47.459206 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.459178 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:03:47.576883 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.576854 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-99q7m"]
Apr 17 08:03:47.579432 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:03:47.579396 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70f8d1df_cd0b_480b_b718_bae60c29718f.slice/crio-563c234e6360834093e7230edcae7ee2925e4582060ea9d3734483064e47ec33 WatchSource:0}: Error finding container 563c234e6360834093e7230edcae7ee2925e4582060ea9d3734483064e47ec33: Status 404 returned error can't find the container with id 563c234e6360834093e7230edcae7ee2925e4582060ea9d3734483064e47ec33
Apr 17 08:03:47.799506 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:47.799469 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-99q7m" event={"ID":"70f8d1df-cd0b-480b-b718-bae60c29718f","Type":"ContainerStarted","Data":"563c234e6360834093e7230edcae7ee2925e4582060ea9d3734483064e47ec33"}
Apr 17 08:03:49.805730 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:49.805691 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-99q7m" event={"ID":"70f8d1df-cd0b-480b-b718-bae60c29718f","Type":"ContainerStarted","Data":"25f999d47d9e3fb114338c1bcb5d19912bb533f75ed090344f523af7c55a2339"}
Apr 17 08:03:49.806183 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:49.805824 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:03:49.822445 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:03:49.822396 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-99q7m" podStartSLOduration=0.692165812 podStartE2EDuration="2.822380427s" podCreationTimestamp="2026-04-17 08:03:47 +0000 UTC" firstStartedPulling="2026-04-17 08:03:47.581710992 +0000 UTC m=+750.357677000" lastFinishedPulling="2026-04-17 08:03:49.711925611 +0000 UTC m=+752.487891615" observedRunningTime="2026-04-17 08:03:49.820392768 +0000 UTC m=+752.596358805" watchObservedRunningTime="2026-04-17 08:03:49.822380427 +0000 UTC m=+752.598346450"
Apr 17 08:04:00.813571 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:00.813541 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-99q7m"
Apr 17 08:04:23.960853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:23.960810 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"]
Apr 17 08:04:23.964054 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:23.964039 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"
Apr 17 08:04:23.966490 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:23.966470 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-k26cm\""
Apr 17 08:04:23.974513 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:23.974489 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"]
Apr 17 08:04:24.077834 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:24.077802 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0831b9c3-c47b-45d8-a949-d1aa74aca1e7-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd\" (UID: \"0831b9c3-c47b-45d8-a949-d1aa74aca1e7\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"
Apr 17 08:04:24.178815 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:24.178779 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0831b9c3-c47b-45d8-a949-d1aa74aca1e7-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd\" (UID: \"0831b9c3-c47b-45d8-a949-d1aa74aca1e7\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"
Apr 17 08:04:24.179140 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:24.179120 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0831b9c3-c47b-45d8-a949-d1aa74aca1e7-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd\" (UID: \"0831b9c3-c47b-45d8-a949-d1aa74aca1e7\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"
Apr 17 08:04:24.274226 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:24.274201 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"
Apr 17 08:04:24.387921 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:24.387891 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"]
Apr 17 08:04:24.390575 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:04:24.390544 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0831b9c3_c47b_45d8_a949_d1aa74aca1e7.slice/crio-91da0c49435f34ea1a248ccbbe400e23b1ef3e90add8cb57756ca8c2a6db4fbe WatchSource:0}: Error finding container 91da0c49435f34ea1a248ccbbe400e23b1ef3e90add8cb57756ca8c2a6db4fbe: Status 404 returned error can't find the container with id 91da0c49435f34ea1a248ccbbe400e23b1ef3e90add8cb57756ca8c2a6db4fbe
Apr 17 08:04:24.904343 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:24.904304 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" event={"ID":"0831b9c3-c47b-45d8-a949-d1aa74aca1e7","Type":"ContainerStarted","Data":"91da0c49435f34ea1a248ccbbe400e23b1ef3e90add8cb57756ca8c2a6db4fbe"}
Apr 17 08:04:27.913544 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:27.913508 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" event={"ID":"0831b9c3-c47b-45d8-a949-d1aa74aca1e7","Type":"ContainerStarted","Data":"54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25"}
Apr 17 08:04:31.924618 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:31.924582 2569 generic.go:358] "Generic (PLEG): container finished" podID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerID="54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25" exitCode=0
Apr 17 08:04:31.925154 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:31.924635 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" event={"ID":"0831b9c3-c47b-45d8-a949-d1aa74aca1e7","Type":"ContainerDied","Data":"54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25"}
Apr 17 08:04:44.969501 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:44.969420 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" event={"ID":"0831b9c3-c47b-45d8-a949-d1aa74aca1e7","Type":"ContainerStarted","Data":"51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92"}
Apr 17 08:04:48.985029 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:48.984987 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" event={"ID":"0831b9c3-c47b-45d8-a949-d1aa74aca1e7","Type":"ContainerStarted","Data":"994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518"}
Apr 17 08:04:48.985411 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:48.985242 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"
Apr 17 08:04:48.986372 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:48.986344 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 17 08:04:49.002006 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:49.001964 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podStartSLOduration=2.425167219 podStartE2EDuration="26.001952987s" podCreationTimestamp="2026-04-17 08:04:23 +0000 UTC" firstStartedPulling="2026-04-17 08:04:24.392411727 +0000 UTC m=+787.168377728" lastFinishedPulling="2026-04-17 08:04:47.969197494 +0000 UTC m=+810.745163496" observedRunningTime="2026-04-17 08:04:49.000826395 +0000 UTC m=+811.776792451" watchObservedRunningTime="2026-04-17 08:04:49.001952987 +0000 UTC m=+811.777919059"
Apr 17 08:04:49.987876 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:49.987845 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"
Apr 17 08:04:49.988523 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:49.987955 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 17 08:04:49.988808 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:49.988784 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:04:50.991016 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:50.990972 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 17 08:04:50.991415 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:04:50.991182 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:05:00.991228 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:00.991185 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 17 08:05:00.991748 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:00.991634 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:05:10.991347 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:10.991290 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 17 08:05:10.991880 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:10.991756 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 17 08:05:20.991744 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:20.991695 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 17 08:05:20.992297 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:20.992087 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 17 08:05:30.991504 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:30.991460 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 17 08:05:30.992072 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:30.991905 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 17 08:05:40.991167 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:40.991123 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 17 08:05:40.991621 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:40.991502 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 17 08:05:50.991807 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:50.991776 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" Apr 17 08:05:50.992390 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:50.991845 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" Apr 17 08:05:59.087289 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.087258 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"] Apr 17 08:05:59.087792 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.087541 2569 kuberuntime_container.go:864] "Killing container with a grace period" 
pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" containerID="cri-o://51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92" gracePeriod=30 Apr 17 08:05:59.087792 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.087672 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" containerID="cri-o://994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518" gracePeriod=30 Apr 17 08:05:59.184107 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.184073 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z"] Apr 17 08:05:59.187268 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.187246 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:05:59.194905 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.194879 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z"] Apr 17 08:05:59.236991 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.236965 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq"] Apr 17 08:05:59.239974 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.239956 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:05:59.246683 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.246662 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq"] Apr 17 08:05:59.276519 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.276489 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1b94178d-c303-471e-91a1-fec215d757eb-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z\" (UID: \"1b94178d-c303-471e-91a1-fec215d757eb\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:05:59.377607 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.377521 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d606e5fb-c0d2-45c0-9a5b-d9db02493562-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq\" (UID: \"d606e5fb-c0d2-45c0-9a5b-d9db02493562\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:05:59.377607 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.377569 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1b94178d-c303-471e-91a1-fec215d757eb-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z\" (UID: \"1b94178d-c303-471e-91a1-fec215d757eb\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:05:59.377925 
ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.377900 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1b94178d-c303-471e-91a1-fec215d757eb-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z\" (UID: \"1b94178d-c303-471e-91a1-fec215d757eb\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:05:59.478446 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.478411 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d606e5fb-c0d2-45c0-9a5b-d9db02493562-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq\" (UID: \"d606e5fb-c0d2-45c0-9a5b-d9db02493562\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:05:59.478819 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.478800 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d606e5fb-c0d2-45c0-9a5b-d9db02493562-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq\" (UID: \"d606e5fb-c0d2-45c0-9a5b-d9db02493562\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:05:59.497397 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.497366 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:05:59.550558 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.550077 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:05:59.627346 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.627328 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z"] Apr 17 08:05:59.630335 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:05:59.630293 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b94178d_c303_471e_91a1_fec215d757eb.slice/crio-c7755df3e6aecde0048d1aa0441013d7bb24ba567ca5ec540f41c137edc5b0f8 WatchSource:0}: Error finding container c7755df3e6aecde0048d1aa0441013d7bb24ba567ca5ec540f41c137edc5b0f8: Status 404 returned error can't find the container with id c7755df3e6aecde0048d1aa0441013d7bb24ba567ca5ec540f41c137edc5b0f8 Apr 17 08:05:59.673459 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:05:59.673433 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq"] Apr 17 08:05:59.676194 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:05:59.676172 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd606e5fb_c0d2_45c0_9a5b_d9db02493562.slice/crio-dbcdfc6cda8c822a951b48c09ede4b528219beca1155aead22861373c000914f WatchSource:0}: Error finding container dbcdfc6cda8c822a951b48c09ede4b528219beca1155aead22861373c000914f: Status 404 returned error can't find the container with id dbcdfc6cda8c822a951b48c09ede4b528219beca1155aead22861373c000914f Apr 17 08:06:00.183113 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:00.183076 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" event={"ID":"1b94178d-c303-471e-91a1-fec215d757eb","Type":"ContainerStarted","Data":"011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005"} Apr 17 08:06:00.183579 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:00.183121 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" event={"ID":"1b94178d-c303-471e-91a1-fec215d757eb","Type":"ContainerStarted","Data":"c7755df3e6aecde0048d1aa0441013d7bb24ba567ca5ec540f41c137edc5b0f8"} Apr 17 08:06:00.184469 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:00.184443 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" event={"ID":"d606e5fb-c0d2-45c0-9a5b-d9db02493562","Type":"ContainerStarted","Data":"4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e"} Apr 17 08:06:00.184561 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:00.184476 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" event={"ID":"d606e5fb-c0d2-45c0-9a5b-d9db02493562","Type":"ContainerStarted","Data":"dbcdfc6cda8c822a951b48c09ede4b528219beca1155aead22861373c000914f"} Apr 17 08:06:00.991321 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:00.991260 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 17 08:06:00.991608 
ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:00.991583 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 17 08:06:04.200853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:04.200823 2569 generic.go:358] "Generic (PLEG): container finished" podID="1b94178d-c303-471e-91a1-fec215d757eb" containerID="011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005" exitCode=0 Apr 17 08:06:04.201278 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:04.200914 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" event={"ID":"1b94178d-c303-471e-91a1-fec215d757eb","Type":"ContainerDied","Data":"011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005"} Apr 17 08:06:04.203281 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:04.203259 2569 generic.go:358] "Generic (PLEG): container finished" podID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerID="51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92" exitCode=0 Apr 17 08:06:04.203388 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:04.203326 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" event={"ID":"0831b9c3-c47b-45d8-a949-d1aa74aca1e7","Type":"ContainerDied","Data":"51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92"} Apr 17 08:06:04.204686 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:04.204667 2569 generic.go:358] "Generic (PLEG): container finished" podID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerID="4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e" exitCode=0 Apr 17 08:06:04.204760 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:04.204732 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" event={"ID":"d606e5fb-c0d2-45c0-9a5b-d9db02493562","Type":"ContainerDied","Data":"4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e"} Apr 17 08:06:05.209798 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:05.209767 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" event={"ID":"1b94178d-c303-471e-91a1-fec215d757eb","Type":"ContainerStarted","Data":"b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d"} Apr 17 08:06:05.210322 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:05.210101 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:06:05.211496 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:05.211466 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:06:05.228325 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:05.228286 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podStartSLOduration=6.22827347 
podStartE2EDuration="6.22827347s" podCreationTimestamp="2026-04-17 08:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 08:06:05.226221092 +0000 UTC m=+888.002187147" watchObservedRunningTime="2026-04-17 08:06:05.22827347 +0000 UTC m=+888.004239493" Apr 17 08:06:06.215201 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:06.215081 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:06:10.995819 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:10.995761 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 17 08:06:10.996333 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:10.996205 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 17 08:06:16.215049 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:16.215004 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:06:17.695393 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:17.695363 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 08:06:17.696975 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:17.696953 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 08:06:20.991797 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:20.991744 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 17 08:06:20.992272 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:20.991908 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" Apr 17 08:06:20.992272 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:20.992063 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 17 08:06:20.992272 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:20.992195 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" Apr 17 08:06:26.215362 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:26.215299 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:06:26.273585 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:26.273547 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" event={"ID":"d606e5fb-c0d2-45c0-9a5b-d9db02493562","Type":"ContainerStarted","Data":"863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab"} Apr 17 08:06:26.273859 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:26.273839 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:06:26.275094 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:26.275068 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 17 08:06:26.289632 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:26.289592 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podStartSLOduration=5.866599268 podStartE2EDuration="27.289580653s" podCreationTimestamp="2026-04-17 08:05:59 +0000 UTC" firstStartedPulling="2026-04-17 08:06:04.205746458 +0000 UTC m=+886.981712459" lastFinishedPulling="2026-04-17 08:06:25.628727841 +0000 UTC m=+908.404693844" observedRunningTime="2026-04-17 08:06:26.289026069 +0000 UTC m=+909.064992117" watchObservedRunningTime="2026-04-17 08:06:26.289580653 +0000 UTC m=+909.065546676" Apr 17 08:06:27.276624 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:27.276584 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 17 08:06:29.224402 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.224371 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" Apr 17 08:06:29.283631 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.283594 2569 generic.go:358] "Generic (PLEG): container finished" podID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerID="994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518" exitCode=0 Apr 17 08:06:29.283819 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.283681 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" event={"ID":"0831b9c3-c47b-45d8-a949-d1aa74aca1e7","Type":"ContainerDied","Data":"994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518"} Apr 17 08:06:29.283819 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.283702 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" Apr 17 08:06:29.283819 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.283721 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd" event={"ID":"0831b9c3-c47b-45d8-a949-d1aa74aca1e7","Type":"ContainerDied","Data":"91da0c49435f34ea1a248ccbbe400e23b1ef3e90add8cb57756ca8c2a6db4fbe"} Apr 17 08:06:29.283819 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.283739 2569 scope.go:117] "RemoveContainer" containerID="994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518" Apr 17 08:06:29.291167 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.291150 2569 scope.go:117] "RemoveContainer" containerID="51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92" Apr 17 08:06:29.297540 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.297522 2569 scope.go:117] "RemoveContainer" containerID="54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25" Apr 17 08:06:29.304023 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.304008 2569 scope.go:117] "RemoveContainer" containerID="994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518" Apr 17 08:06:29.304268 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:06:29.304250 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518\": container with ID starting with 994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518 not found: ID does not exist" containerID="994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518" Apr 17 08:06:29.304318 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.304275 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518"} err="failed to get container status \"994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518\": rpc error: code = NotFound desc = could not find container \"994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518\": container with ID starting with 994bf3d5497b00511fb84df78735d34a14c857f24445550cdc7f3d21e6b28518 not found: ID does not exist" Apr 17 08:06:29.304318 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.304292 2569 scope.go:117] "RemoveContainer" containerID="51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92" Apr 17 08:06:29.304515 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:06:29.304497 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92\": container with ID starting with 51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92 not found: ID does not exist" containerID="51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92" Apr 17 08:06:29.304557 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.304529 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92"} err="failed to get container status \"51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92\": rpc error: code = NotFound desc = could not find container \"51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92\": container with ID 
starting with 51dfacc51aa8cba8e213e7aed5562a017d9ba3121de3054e03a359944db62a92 not found: ID does not exist" Apr 17 08:06:29.304557 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.304546 2569 scope.go:117] "RemoveContainer" containerID="54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25" Apr 17 08:06:29.304811 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:06:29.304793 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25\": container with ID starting with 54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25 not found: ID does not exist" containerID="54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25" Apr 17 08:06:29.304873 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.304815 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25"} err="failed to get container status \"54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25\": rpc error: code = NotFound desc = could not find container \"54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25\": container with ID starting with 54dda211c157e0200af48a49485930cae770cff37ddc31a4b39f04c39d908e25 not found: ID does not exist" Apr 17 08:06:29.321190 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.321163 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0831b9c3-c47b-45d8-a949-d1aa74aca1e7-kserve-provision-location\") pod \"0831b9c3-c47b-45d8-a949-d1aa74aca1e7\" (UID: \"0831b9c3-c47b-45d8-a949-d1aa74aca1e7\") " Apr 17 08:06:29.321456 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.321433 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0831b9c3-c47b-45d8-a949-d1aa74aca1e7-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "0831b9c3-c47b-45d8-a949-d1aa74aca1e7" (UID: "0831b9c3-c47b-45d8-a949-d1aa74aca1e7"). InnerVolumeSpecName "kserve-provision-location". 
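[annotation] The RemoveContainer/NotFound noise at 08:06:29 is the kubelet asking CRI-O to delete containers that are already gone; it logs the error and moves on. The kill order for kserve-container (51dfacc5...) was issued at 08:05:59.087 with gracePeriod=30 and its ContainerDied arrived at 08:06:04.203, well inside the grace period, while the agent (994bf3d5...) only died at 08:06:29.28, right at the 30-second limit, which is why the sandbox teardown and the NotFound retries all land in the same second. A sketch that pairs each kill order with its ContainerDied event and reports the actual stop time (stdlib only; assumes the one-record-per-line layout and the Apr 17 timestamp prefix used here):

    import re
    import sys
    from datetime import datetime

    kill = re.compile(r'^Apr 17 (\S+) .*"Killing container with a grace period"'
                      r'.*containerID="cri-o://([0-9a-f]{64})" gracePeriod=(\d+)')
    died = re.compile(r'^Apr 17 (\S+) .*"ContainerDied","Data":"([0-9a-f]{64})"')

    def ts(s):  # time-of-day with microseconds, e.g. 08:05:59.087792
        return datetime.strptime(s, "%H:%M:%S.%f")

    pending = {}
    for line in sys.stdin:
        m = kill.search(line)
        if m:
            pending[m.group(2)] = (ts(m.group(1)), int(m.group(3)))
            continue
        m = died.search(line)
        if m and m.group(2) in pending:
            start, grace = pending.pop(m.group(2))
            took = (ts(m.group(1)) - start).total_seconds()
            print(f"{m.group(2)[:12]}: stopped in {took:.1f}s (grace {grace}s)")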
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 08:06:29.422020 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.421930 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0831b9c3-c47b-45d8-a949-d1aa74aca1e7-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:06:29.604237 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.604211 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"] Apr 17 08:06:29.605738 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.605716 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-91db1-predictor-54bc4f8b9-jrgpd"] Apr 17 08:06:29.754123 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:29.754049 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" path="/var/lib/kubelet/pods/0831b9c3-c47b-45d8-a949-d1aa74aca1e7/volumes" Apr 17 08:06:36.215807 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:36.215759 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:06:37.276619 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:37.276573 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 17 08:06:46.215529 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:46.215474 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:06:47.277197 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:47.277158 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 17 08:06:56.215430 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:56.215382 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:06:57.276918 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:06:57.276872 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 17 08:07:06.215077 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:06.215029 2569 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:07:07.277316 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:07.277272 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 17 08:07:07.751927 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:07.751824 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 17 08:07:17.277143 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:17.277093 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 17 08:07:17.753034 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:17.753002 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:07:27.278640 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:27.278603 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:07:39.384350 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.384270 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z"] Apr 17 08:07:39.384748 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.384597 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" containerID="cri-o://b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d" gracePeriod=30 Apr 17 08:07:39.423707 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.423676 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j"] Apr 17 08:07:39.424061 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.424047 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" Apr 17 08:07:39.424106 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.424065 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" Apr 17 08:07:39.424106 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.424099 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" Apr 17 08:07:39.424170 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.424108 2569 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" Apr 17 08:07:39.424170 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.424118 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="storage-initializer" Apr 17 08:07:39.424170 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.424127 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="storage-initializer" Apr 17 08:07:39.424255 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.424192 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="kserve-container" Apr 17 08:07:39.424255 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.424205 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="0831b9c3-c47b-45d8-a949-d1aa74aca1e7" containerName="agent" Apr 17 08:07:39.428619 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.428598 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:07:39.437471 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.437442 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j"] Apr 17 08:07:39.486357 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.486325 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2"] Apr 17 08:07:39.489405 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.489389 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:07:39.499068 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.499039 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2"] Apr 17 08:07:39.529723 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.529682 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/28a318bf-aed8-4668-8362-c654a5fc4ef1-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j\" (UID: \"28a318bf-aed8-4668-8362-c654a5fc4ef1\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:07:39.582675 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.582627 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq"] Apr 17 08:07:39.583041 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.582989 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" containerID="cri-o://863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab" gracePeriod=30 Apr 17 08:07:39.630314 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.630277 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/28a318bf-aed8-4668-8362-c654a5fc4ef1-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j\" (UID: \"28a318bf-aed8-4668-8362-c654a5fc4ef1\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:07:39.630476 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.630342 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1519eac7-c884-46b3-a09c-73c2e87a59de-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2\" (UID: \"1519eac7-c884-46b3-a09c-73c2e87a59de\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:07:39.630682 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.630641 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/28a318bf-aed8-4668-8362-c654a5fc4ef1-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j\" (UID: \"28a318bf-aed8-4668-8362-c654a5fc4ef1\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:07:39.731699 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.731585 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1519eac7-c884-46b3-a09c-73c2e87a59de-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2\" (UID: \"1519eac7-c884-46b3-a09c-73c2e87a59de\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:07:39.731959 ip-10-0-132-178 
kubenswrapper[2569]: I0417 08:07:39.731926 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1519eac7-c884-46b3-a09c-73c2e87a59de-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2\" (UID: \"1519eac7-c884-46b3-a09c-73c2e87a59de\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:07:39.739041 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.739022 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:07:39.801052 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.800821 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:07:39.864888 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.864429 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j"] Apr 17 08:07:39.868422 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:07:39.868396 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28a318bf_aed8_4668_8362_c654a5fc4ef1.slice/crio-7486be9968596989015a75a86e5cda4dd7e6b02445cd626a95b7a80b17d73fe3 WatchSource:0}: Error finding container 7486be9968596989015a75a86e5cda4dd7e6b02445cd626a95b7a80b17d73fe3: Status 404 returned error can't find the container with id 7486be9968596989015a75a86e5cda4dd7e6b02445cd626a95b7a80b17d73fe3 Apr 17 08:07:39.871045 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.871027 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 08:07:39.925275 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:39.925249 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2"] Apr 17 08:07:39.926752 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:07:39.926727 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1519eac7_c884_46b3_a09c_73c2e87a59de.slice/crio-45a0fa4e15f432e2813db68bf3a4388214378e5482559e42eb1979f2f4544316 WatchSource:0}: Error finding container 45a0fa4e15f432e2813db68bf3a4388214378e5482559e42eb1979f2f4544316: Status 404 returned error can't find the container with id 45a0fa4e15f432e2813db68bf3a4388214378e5482559e42eb1979f2f4544316 Apr 17 08:07:40.483539 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:40.483492 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" event={"ID":"28a318bf-aed8-4668-8362-c654a5fc4ef1","Type":"ContainerStarted","Data":"daf96eab2bb3f27815609281e20a28de64bd5ccbabd04c8b803914e2f867bd3a"} Apr 17 08:07:40.484037 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:40.483552 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" event={"ID":"28a318bf-aed8-4668-8362-c654a5fc4ef1","Type":"ContainerStarted","Data":"7486be9968596989015a75a86e5cda4dd7e6b02445cd626a95b7a80b17d73fe3"} Apr 17 08:07:40.487394 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:40.487361 2569 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" event={"ID":"1519eac7-c884-46b3-a09c-73c2e87a59de","Type":"ContainerStarted","Data":"f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d"} Apr 17 08:07:40.487514 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:40.487397 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" event={"ID":"1519eac7-c884-46b3-a09c-73c2e87a59de","Type":"ContainerStarted","Data":"45a0fa4e15f432e2813db68bf3a4388214378e5482559e42eb1979f2f4544316"} Apr 17 08:07:43.220514 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.220491 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:07:43.358087 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.357991 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d606e5fb-c0d2-45c0-9a5b-d9db02493562-kserve-provision-location\") pod \"d606e5fb-c0d2-45c0-9a5b-d9db02493562\" (UID: \"d606e5fb-c0d2-45c0-9a5b-d9db02493562\") " Apr 17 08:07:43.358322 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.358299 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d606e5fb-c0d2-45c0-9a5b-d9db02493562-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "d606e5fb-c0d2-45c0-9a5b-d9db02493562" (UID: "d606e5fb-c0d2-45c0-9a5b-d9db02493562"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 08:07:43.459200 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.459165 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d606e5fb-c0d2-45c0-9a5b-d9db02493562-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:07:43.497530 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.497496 2569 generic.go:358] "Generic (PLEG): container finished" podID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerID="863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab" exitCode=0 Apr 17 08:07:43.497722 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.497565 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" Apr 17 08:07:43.497722 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.497590 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" event={"ID":"d606e5fb-c0d2-45c0-9a5b-d9db02493562","Type":"ContainerDied","Data":"863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab"} Apr 17 08:07:43.497722 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.497635 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq" event={"ID":"d606e5fb-c0d2-45c0-9a5b-d9db02493562","Type":"ContainerDied","Data":"dbcdfc6cda8c822a951b48c09ede4b528219beca1155aead22861373c000914f"} Apr 17 08:07:43.497722 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.497671 2569 scope.go:117] "RemoveContainer" containerID="863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab" Apr 17 08:07:43.505451 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.505379 2569 scope.go:117] "RemoveContainer" containerID="4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e" Apr 17 08:07:43.512459 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.512442 2569 scope.go:117] "RemoveContainer" containerID="863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab" Apr 17 08:07:43.512721 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:07:43.512700 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab\": container with ID starting with 863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab not found: ID does not exist" containerID="863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab" Apr 17 08:07:43.512776 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.512729 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab"} err="failed to get container status \"863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab\": rpc error: code = NotFound desc = could not find container \"863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab\": container with ID starting with 863533b8ebd2273f0f330c384b401917e6264c03a29915a9f16bc850d5b902ab not found: ID does not exist" Apr 17 08:07:43.512776 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.512748 2569 scope.go:117] "RemoveContainer" containerID="4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e" Apr 17 08:07:43.512998 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:07:43.512982 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e\": container with ID starting with 4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e not found: ID does not exist" containerID="4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e" Apr 17 08:07:43.513040 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.513001 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e"} err="failed to get container status 
\"4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e\": rpc error: code = NotFound desc = could not find container \"4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e\": container with ID starting with 4c9f8d6c39782d1f642cc1a7ed76563edbfce50fb5f29068683dfc6112b14c8e not found: ID does not exist" Apr 17 08:07:43.517997 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.517977 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq"] Apr 17 08:07:43.521021 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.521002 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-247b1-predictor-bddf8599-lgqbq"] Apr 17 08:07:43.754328 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.754300 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" path="/var/lib/kubelet/pods/d606e5fb-c0d2-45c0-9a5b-d9db02493562/volumes" Apr 17 08:07:43.755226 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.755209 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:07:43.861054 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.861023 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1b94178d-c303-471e-91a1-fec215d757eb-kserve-provision-location\") pod \"1b94178d-c303-471e-91a1-fec215d757eb\" (UID: \"1b94178d-c303-471e-91a1-fec215d757eb\") " Apr 17 08:07:43.861377 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.861350 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b94178d-c303-471e-91a1-fec215d757eb-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "1b94178d-c303-471e-91a1-fec215d757eb" (UID: "1b94178d-c303-471e-91a1-fec215d757eb"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 08:07:43.962221 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:43.962149 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1b94178d-c303-471e-91a1-fec215d757eb-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:07:44.501244 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.501206 2569 generic.go:358] "Generic (PLEG): container finished" podID="1b94178d-c303-471e-91a1-fec215d757eb" containerID="b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d" exitCode=0 Apr 17 08:07:44.501711 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.501239 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" event={"ID":"1b94178d-c303-471e-91a1-fec215d757eb","Type":"ContainerDied","Data":"b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d"} Apr 17 08:07:44.501711 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.501281 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" Apr 17 08:07:44.501711 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.501286 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z" event={"ID":"1b94178d-c303-471e-91a1-fec215d757eb","Type":"ContainerDied","Data":"c7755df3e6aecde0048d1aa0441013d7bb24ba567ca5ec540f41c137edc5b0f8"} Apr 17 08:07:44.501711 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.501312 2569 scope.go:117] "RemoveContainer" containerID="b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d" Apr 17 08:07:44.502574 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.502557 2569 generic.go:358] "Generic (PLEG): container finished" podID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerID="f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d" exitCode=0 Apr 17 08:07:44.502662 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.502624 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" event={"ID":"1519eac7-c884-46b3-a09c-73c2e87a59de","Type":"ContainerDied","Data":"f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d"} Apr 17 08:07:44.504079 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.504061 2569 generic.go:358] "Generic (PLEG): container finished" podID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerID="daf96eab2bb3f27815609281e20a28de64bd5ccbabd04c8b803914e2f867bd3a" exitCode=0 Apr 17 08:07:44.504171 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.504121 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" event={"ID":"28a318bf-aed8-4668-8362-c654a5fc4ef1","Type":"ContainerDied","Data":"daf96eab2bb3f27815609281e20a28de64bd5ccbabd04c8b803914e2f867bd3a"} Apr 17 08:07:44.510300 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.510269 2569 scope.go:117] "RemoveContainer" containerID="011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005" Apr 17 08:07:44.517505 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.517484 2569 scope.go:117] "RemoveContainer" containerID="b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d" Apr 17 08:07:44.517781 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:07:44.517763 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d\": container with ID starting with b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d not found: ID does not exist" containerID="b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d" Apr 17 08:07:44.517867 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.517791 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d"} err="failed to get container status \"b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d\": rpc error: code = NotFound desc = could not find container \"b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d\": container with ID starting with b8b8f7d28a6d758e717a6da4ce976abbd955b58e988b891ab08802adceb5fd1d not found: ID does not exist" Apr 17 08:07:44.517867 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.517815 2569 
scope.go:117] "RemoveContainer" containerID="011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005" Apr 17 08:07:44.518040 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:07:44.518024 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005\": container with ID starting with 011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005 not found: ID does not exist" containerID="011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005" Apr 17 08:07:44.518085 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.518044 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005"} err="failed to get container status \"011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005\": rpc error: code = NotFound desc = could not find container \"011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005\": container with ID starting with 011fb46f863ad603309fbf0ab4ac299953d65c021f86caa85999fee4142c3005 not found: ID does not exist" Apr 17 08:07:44.546405 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.546380 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z"] Apr 17 08:07:44.550539 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:44.550518 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-247b1-predictor-7f4f9949cf-xng2z"] Apr 17 08:07:45.510262 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.510221 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" event={"ID":"28a318bf-aed8-4668-8362-c654a5fc4ef1","Type":"ContainerStarted","Data":"3ab0d88588067fc84ccf535f2a36684b20f16bf8322f91f595a6df7b3ec8b7d9"} Apr 17 08:07:45.510765 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.510563 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:07:45.512229 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.512202 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 17 08:07:45.512528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.512511 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" event={"ID":"1519eac7-c884-46b3-a09c-73c2e87a59de","Type":"ContainerStarted","Data":"1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf"} Apr 17 08:07:45.512772 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.512749 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:07:45.513730 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.513705 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 17 08:07:45.526593 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.526560 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podStartSLOduration=6.5265492819999995 podStartE2EDuration="6.526549282s" podCreationTimestamp="2026-04-17 08:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 08:07:45.525603873 +0000 UTC m=+988.301569895" watchObservedRunningTime="2026-04-17 08:07:45.526549282 +0000 UTC m=+988.302515305" Apr 17 08:07:45.546744 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.546702 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podStartSLOduration=6.546689901 podStartE2EDuration="6.546689901s" podCreationTimestamp="2026-04-17 08:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 08:07:45.545638577 +0000 UTC m=+988.321604602" watchObservedRunningTime="2026-04-17 08:07:45.546689901 +0000 UTC m=+988.322655921" Apr 17 08:07:45.754453 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:45.754419 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b94178d-c303-471e-91a1-fec215d757eb" path="/var/lib/kubelet/pods/1b94178d-c303-471e-91a1-fec215d757eb/volumes" Apr 17 08:07:46.515961 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:46.515913 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 17 08:07:46.516386 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:46.515915 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 17 08:07:56.516006 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:56.515965 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 17 08:07:56.516463 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:07:56.515965 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 17 08:08:06.516072 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:06.516025 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" probeResult="failure" output="dial tcp 
10.132.0.21:8080: connect: connection refused" Apr 17 08:08:06.516541 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:06.516025 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 17 08:08:16.516061 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:16.516014 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 17 08:08:16.516061 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:16.516014 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 17 08:08:26.516094 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:26.516052 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 17 08:08:26.516470 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:26.516052 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 17 08:08:36.516747 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:36.516695 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 17 08:08:36.517202 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:36.516695 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 17 08:08:46.516837 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:46.516794 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 17 08:08:46.517817 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:46.517795 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:08:56.516985 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:08:56.516942 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:09:19.658439 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:19.658402 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j"] Apr 17 08:09:19.659038 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:19.658731 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" containerID="cri-o://3ab0d88588067fc84ccf535f2a36684b20f16bf8322f91f595a6df7b3ec8b7d9" gracePeriod=30 Apr 17 08:09:19.765194 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:19.765158 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2"] Apr 17 08:09:19.765503 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:19.765461 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" containerID="cri-o://1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf" gracePeriod=30 Apr 17 08:09:23.208258 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.208234 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:09:23.277000 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.276970 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1519eac7-c884-46b3-a09c-73c2e87a59de-kserve-provision-location\") pod \"1519eac7-c884-46b3-a09c-73c2e87a59de\" (UID: \"1519eac7-c884-46b3-a09c-73c2e87a59de\") " Apr 17 08:09:23.277286 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.277260 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1519eac7-c884-46b3-a09c-73c2e87a59de-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "1519eac7-c884-46b3-a09c-73c2e87a59de" (UID: "1519eac7-c884-46b3-a09c-73c2e87a59de"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 08:09:23.378037 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.377964 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1519eac7-c884-46b3-a09c-73c2e87a59de-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:09:23.782871 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.782843 2569 generic.go:358] "Generic (PLEG): container finished" podID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerID="1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf" exitCode=0 Apr 17 08:09:23.783006 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.782910 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" Apr 17 08:09:23.783006 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.782914 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" event={"ID":"1519eac7-c884-46b3-a09c-73c2e87a59de","Type":"ContainerDied","Data":"1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf"} Apr 17 08:09:23.783006 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.782956 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2" event={"ID":"1519eac7-c884-46b3-a09c-73c2e87a59de","Type":"ContainerDied","Data":"45a0fa4e15f432e2813db68bf3a4388214378e5482559e42eb1979f2f4544316"} Apr 17 08:09:23.783006 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.782980 2569 scope.go:117] "RemoveContainer" containerID="1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf" Apr 17 08:09:23.784714 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.784678 2569 generic.go:358] "Generic (PLEG): container finished" podID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerID="3ab0d88588067fc84ccf535f2a36684b20f16bf8322f91f595a6df7b3ec8b7d9" exitCode=0 Apr 17 08:09:23.784828 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.784728 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" event={"ID":"28a318bf-aed8-4668-8362-c654a5fc4ef1","Type":"ContainerDied","Data":"3ab0d88588067fc84ccf535f2a36684b20f16bf8322f91f595a6df7b3ec8b7d9"} Apr 17 08:09:23.809111 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.809084 2569 scope.go:117] "RemoveContainer" containerID="f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d" Apr 17 08:09:23.816209 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.816193 2569 scope.go:117] "RemoveContainer" containerID="1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf" Apr 17 08:09:23.816497 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:09:23.816476 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf\": container with ID starting with 1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf not found: ID does not exist" containerID="1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf" Apr 17 08:09:23.816593 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.816505 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf"} err="failed to get container status \"1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf\": rpc error: code = NotFound desc = could not find container \"1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf\": container with ID starting with 1e00b68c018d619f03e3ba7f1c37ae8351eb54b62ac519b4701fc6672ee51edf not found: ID does not exist" Apr 17 08:09:23.816593 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.816529 2569 scope.go:117] "RemoveContainer" containerID="f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d" Apr 17 08:09:23.816820 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:09:23.816802 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d\": container with ID starting with f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d not found: ID does not exist" containerID="f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d" Apr 17 08:09:23.816871 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.816827 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d"} err="failed to get container status \"f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d\": rpc error: code = NotFound desc = could not find container \"f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d\": container with ID starting with f2db404779c4f3455d7403e7a63e305676b750c31da9f3f3d20aa75c9d42612d not found: ID does not exist" Apr 17 08:09:23.820131 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.820106 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2"] Apr 17 08:09:23.825473 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.825450 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-4b469-predictor-5fffc65c55-74ln2"] Apr 17 08:09:23.888010 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.887991 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:09:23.983706 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.983635 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/28a318bf-aed8-4668-8362-c654a5fc4ef1-kserve-provision-location\") pod \"28a318bf-aed8-4668-8362-c654a5fc4ef1\" (UID: \"28a318bf-aed8-4668-8362-c654a5fc4ef1\") " Apr 17 08:09:23.984018 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:23.983997 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a318bf-aed8-4668-8362-c654a5fc4ef1-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "28a318bf-aed8-4668-8362-c654a5fc4ef1" (UID: "28a318bf-aed8-4668-8362-c654a5fc4ef1"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 08:09:24.084484 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:24.084434 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/28a318bf-aed8-4668-8362-c654a5fc4ef1-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:09:24.788761 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:24.788726 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" event={"ID":"28a318bf-aed8-4668-8362-c654a5fc4ef1","Type":"ContainerDied","Data":"7486be9968596989015a75a86e5cda4dd7e6b02445cd626a95b7a80b17d73fe3"} Apr 17 08:09:24.789213 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:24.788774 2569 scope.go:117] "RemoveContainer" containerID="3ab0d88588067fc84ccf535f2a36684b20f16bf8322f91f595a6df7b3ec8b7d9" Apr 17 08:09:24.789213 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:24.788782 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j" Apr 17 08:09:24.797066 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:24.797043 2569 scope.go:117] "RemoveContainer" containerID="daf96eab2bb3f27815609281e20a28de64bd5ccbabd04c8b803914e2f867bd3a" Apr 17 08:09:24.809361 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:24.809339 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j"] Apr 17 08:09:24.812076 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:24.812052 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-4b469-predictor-6b5dfbf799-fjw4j"] Apr 17 08:09:25.753671 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:25.753623 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" path="/var/lib/kubelet/pods/1519eac7-c884-46b3-a09c-73c2e87a59de/volumes" Apr 17 08:09:25.754004 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:25.753989 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" path="/var/lib/kubelet/pods/28a318bf-aed8-4668-8362-c654a5fc4ef1/volumes" Apr 17 08:09:29.727119 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727082 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"] Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727323 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727333 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727341 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="storage-initializer" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727346 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="storage-initializer" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727358 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727363 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727370 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="storage-initializer" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727383 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="storage-initializer" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727392 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" 
Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727397 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727406 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="storage-initializer" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727411 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="storage-initializer" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727416 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727421 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727428 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="storage-initializer" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727433 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="storage-initializer" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727472 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="d606e5fb-c0d2-45c0-9a5b-d9db02493562" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727481 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="1b94178d-c303-471e-91a1-fec215d757eb" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727487 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="28a318bf-aed8-4668-8362-c654a5fc4ef1" containerName="kserve-container" Apr 17 08:09:29.727528 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.727494 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="1519eac7-c884-46b3-a09c-73c2e87a59de" containerName="kserve-container" Apr 17 08:09:29.732236 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.732207 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" Apr 17 08:09:29.734539 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.734515 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-k26cm\"" Apr 17 08:09:29.737238 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.737216 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"] Apr 17 08:09:29.824729 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.824698 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2dbda477-33dd-4b6e-b384-b6f834c30525-kserve-provision-location\") pod \"isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9\" (UID: \"2dbda477-33dd-4b6e-b384-b6f834c30525\") " pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" Apr 17 08:09:29.925853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.925816 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2dbda477-33dd-4b6e-b384-b6f834c30525-kserve-provision-location\") pod \"isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9\" (UID: \"2dbda477-33dd-4b6e-b384-b6f834c30525\") " pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" Apr 17 08:09:29.926175 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:29.926159 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2dbda477-33dd-4b6e-b384-b6f834c30525-kserve-provision-location\") pod \"isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9\" (UID: \"2dbda477-33dd-4b6e-b384-b6f834c30525\") " pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" Apr 17 08:09:30.043585 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:30.043548 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:09:30.159497 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:30.159473 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"]
Apr 17 08:09:30.161718 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:09:30.161692 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dbda477_33dd_4b6e_b384_b6f834c30525.slice/crio-f6cfdf0836dda1a1d0ef0ab2227b86a422e050f2d5da77876b83fed28e320552 WatchSource:0}: Error finding container f6cfdf0836dda1a1d0ef0ab2227b86a422e050f2d5da77876b83fed28e320552: Status 404 returned error can't find the container with id f6cfdf0836dda1a1d0ef0ab2227b86a422e050f2d5da77876b83fed28e320552
Apr 17 08:09:30.808315 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:30.808275 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" event={"ID":"2dbda477-33dd-4b6e-b384-b6f834c30525","Type":"ContainerStarted","Data":"dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01"}
Apr 17 08:09:30.808315 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:30.808320 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" event={"ID":"2dbda477-33dd-4b6e-b384-b6f834c30525","Type":"ContainerStarted","Data":"f6cfdf0836dda1a1d0ef0ab2227b86a422e050f2d5da77876b83fed28e320552"}
Apr 17 08:09:34.821852 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:34.821812 2569 generic.go:358] "Generic (PLEG): container finished" podID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerID="dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01" exitCode=0
Apr 17 08:09:34.822214 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:34.821869 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" event={"ID":"2dbda477-33dd-4b6e-b384-b6f834c30525","Type":"ContainerDied","Data":"dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01"}
Apr 17 08:09:35.825904 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:35.825870 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" event={"ID":"2dbda477-33dd-4b6e-b384-b6f834c30525","Type":"ContainerStarted","Data":"f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3"}
Apr 17 08:09:35.825904 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:35.825904 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" event={"ID":"2dbda477-33dd-4b6e-b384-b6f834c30525","Type":"ContainerStarted","Data":"d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87"}
Apr 17 08:09:35.826392 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:35.826362 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:09:35.826435 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:35.826409 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:09:35.827708 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:35.827672 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:09:35.828301 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:35.828271 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:09:35.843979 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:35.843936 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podStartSLOduration=6.843920756 podStartE2EDuration="6.843920756s" podCreationTimestamp="2026-04-17 08:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 08:09:35.842323317 +0000 UTC m=+1098.618289343" watchObservedRunningTime="2026-04-17 08:09:35.843920756 +0000 UTC m=+1098.619886776"
Apr 17 08:09:36.828469 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:36.828430 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:09:36.828904 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:36.828876 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:09:46.828446 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:46.828396 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:09:46.828943 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:46.828894 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:09:53.806065 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:53.806012 2569 pod_container_manager_linux.go:217] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod1519eac7-c884-46b3-a09c-73c2e87a59de"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1519eac7-c884-46b3-a09c-73c2e87a59de] : Timed out while waiting for systemd to remove kubepods-burstable-pod1519eac7_c884_46b3_a09c_73c2e87a59de.slice"
Apr 17 08:09:56.829402 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:56.829346 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:09:56.829848 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:09:56.829761 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:10:06.829028 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:06.828973 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:10:06.829564 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:06.829456 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:10:16.829450 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:16.829399 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:10:16.829972 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:16.829864 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:10:26.828704 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:26.828639 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:10:26.829145 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:26.829122 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:10:36.829170 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:36.829060 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:10:36.829578 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:36.829555 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:10:46.829836 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:46.829805 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:10:46.830210 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:46.829863 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:10:54.962546 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:54.962513 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"]
Apr 17 08:10:54.963041 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:54.962827 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" containerID="cri-o://d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87" gracePeriod=30
Apr 17 08:10:54.963041 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:54.962915 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" containerID="cri-o://f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3" gracePeriod=30
Apr 17 08:10:54.986631 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:54.986601 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"]
Apr 17 08:10:54.989824 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:54.989810 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:10:54.998593 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:54.998571 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"]
Apr 17 08:10:55.025855 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:55.025828 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/49b2198f-997e-40a9-a7c7-ae57f87bda65-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q\" (UID: \"49b2198f-997e-40a9-a7c7-ae57f87bda65\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:10:55.126570 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:55.126530 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/49b2198f-997e-40a9-a7c7-ae57f87bda65-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q\" (UID: \"49b2198f-997e-40a9-a7c7-ae57f87bda65\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:10:55.126890 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:55.126873 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/49b2198f-997e-40a9-a7c7-ae57f87bda65-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q\" (UID: \"49b2198f-997e-40a9-a7c7-ae57f87bda65\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:10:55.300341 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:55.300309 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:10:55.416707 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:55.416682 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"]
Apr 17 08:10:55.419243 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:10:55.419212 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49b2198f_997e_40a9_a7c7_ae57f87bda65.slice/crio-f9d790e821c7b2946d08202d7ba64567d8b2ed3edf5c76ee0969913dcb8f9b44 WatchSource:0}: Error finding container f9d790e821c7b2946d08202d7ba64567d8b2ed3edf5c76ee0969913dcb8f9b44: Status 404 returned error can't find the container with id f9d790e821c7b2946d08202d7ba64567d8b2ed3edf5c76ee0969913dcb8f9b44
Apr 17 08:10:56.047971 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:56.047931 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" event={"ID":"49b2198f-997e-40a9-a7c7-ae57f87bda65","Type":"ContainerStarted","Data":"6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7"}
Apr 17 08:10:56.048365 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:56.047976 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" event={"ID":"49b2198f-997e-40a9-a7c7-ae57f87bda65","Type":"ContainerStarted","Data":"f9d790e821c7b2946d08202d7ba64567d8b2ed3edf5c76ee0969913dcb8f9b44"}
Apr 17 08:10:56.828808 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:56.828764 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:10:56.829131 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:10:56.829104 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:11:00.060191 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:00.060159 2569 generic.go:358] "Generic (PLEG): container finished" podID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerID="6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7" exitCode=0
Apr 17 08:11:00.060623 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:00.060237 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" event={"ID":"49b2198f-997e-40a9-a7c7-ae57f87bda65","Type":"ContainerDied","Data":"6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7"}
Apr 17 08:11:00.062149 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:00.062129 2569 generic.go:358] "Generic (PLEG): container finished" podID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerID="d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87" exitCode=0
Apr 17 08:11:00.062252 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:00.062180 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" event={"ID":"2dbda477-33dd-4b6e-b384-b6f834c30525","Type":"ContainerDied","Data":"d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87"}
Apr 17 08:11:01.065962 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:01.065924 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" event={"ID":"49b2198f-997e-40a9-a7c7-ae57f87bda65","Type":"ContainerStarted","Data":"8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa"}
Apr 17 08:11:01.066440 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:01.066204 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:11:01.067399 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:01.067372 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:11:01.081723 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:01.081675 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podStartSLOduration=7.081661711 podStartE2EDuration="7.081661711s" podCreationTimestamp="2026-04-17 08:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 08:11:01.080112118 +0000 UTC m=+1183.856078141" watchObservedRunningTime="2026-04-17 08:11:01.081661711 +0000 UTC m=+1183.857627728"
Apr 17 08:11:02.069482 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:02.069446 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:11:06.828505 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:06.828442 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:11:06.828940 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:06.828750 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:11:12.070365 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:12.070320 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:11:16.828826 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:16.828780 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused"
Apr 17 08:11:16.829280 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:16.828897 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:11:16.829280 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:16.829157 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 17 08:11:16.829363 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:16.829288 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:11:17.717828 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:17.717796 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log"
Apr 17 08:11:17.721368 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:17.721343 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log"
Apr 17 08:11:22.069708 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:22.069637 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:11:25.101551 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.101528 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:11:25.135552 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.135521 2569 generic.go:358] "Generic (PLEG): container finished" podID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerID="f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3" exitCode=0
Apr 17 08:11:25.135728 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.135572 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" event={"ID":"2dbda477-33dd-4b6e-b384-b6f834c30525","Type":"ContainerDied","Data":"f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3"}
Apr 17 08:11:25.135728 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.135597 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9" event={"ID":"2dbda477-33dd-4b6e-b384-b6f834c30525","Type":"ContainerDied","Data":"f6cfdf0836dda1a1d0ef0ab2227b86a422e050f2d5da77876b83fed28e320552"}
Apr 17 08:11:25.135728 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.135598 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"
Apr 17 08:11:25.135728 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.135613 2569 scope.go:117] "RemoveContainer" containerID="f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3"
Apr 17 08:11:25.143081 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.143060 2569 scope.go:117] "RemoveContainer" containerID="d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87"
Apr 17 08:11:25.149779 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.149760 2569 scope.go:117] "RemoveContainer" containerID="dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01"
Apr 17 08:11:25.156118 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.156102 2569 scope.go:117] "RemoveContainer" containerID="f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3"
Apr 17 08:11:25.156369 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:11:25.156342 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3\": container with ID starting with f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3 not found: ID does not exist" containerID="f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3"
Apr 17 08:11:25.156463 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.156370 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3"} err="failed to get container status \"f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3\": rpc error: code = NotFound desc = could not find container \"f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3\": container with ID starting with f512cfe9a61ffeed04daae22d247cb62ffa15ca6bea85837c2112088f0f8cdc3 not found: ID does not exist"
Apr 17 08:11:25.156463 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.156388 2569 scope.go:117] "RemoveContainer" containerID="d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87"
Apr 17 08:11:25.156621 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:11:25.156606 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87\": container with ID starting with d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87 not found: ID does not exist" containerID="d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87"
Apr 17 08:11:25.156674 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.156626 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87"} err="failed to get container status \"d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87\": rpc error: code = NotFound desc = could not find container \"d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87\": container with ID starting with d231fb5528e607f0baa644aa79ea4d4c811a69f0c7ba0ae54766b89391d25c87 not found: ID does not exist"
Apr 17 08:11:25.156674 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.156659 2569 scope.go:117] "RemoveContainer" containerID="dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01"
Apr 17 08:11:25.156883 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:11:25.156863 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01\": container with ID starting with dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01 not found: ID does not exist" containerID="dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01"
Apr 17 08:11:25.156927 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.156888 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01"} err="failed to get container status \"dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01\": rpc error: code = NotFound desc = could not find container \"dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01\": container with ID starting with dbafde09ac6dbdf688aead5136a8b4ff0f0d602d8bb16e453ad1321592292f01 not found: ID does not exist"
Apr 17 08:11:25.256432 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.256341 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2dbda477-33dd-4b6e-b384-b6f834c30525-kserve-provision-location\") pod \"2dbda477-33dd-4b6e-b384-b6f834c30525\" (UID: \"2dbda477-33dd-4b6e-b384-b6f834c30525\") "
Apr 17 08:11:25.256724 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.256697 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbda477-33dd-4b6e-b384-b6f834c30525-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "2dbda477-33dd-4b6e-b384-b6f834c30525" (UID: "2dbda477-33dd-4b6e-b384-b6f834c30525"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 17 08:11:25.357522 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.357478 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2dbda477-33dd-4b6e-b384-b6f834c30525-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\""
Apr 17 08:11:25.456611 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.456543 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"]
Apr 17 08:11:25.458129 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.458108 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-0e2b3-predictor-764bcb96-b6dl9"]
Apr 17 08:11:25.754053 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:25.754015 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" path="/var/lib/kubelet/pods/2dbda477-33dd-4b6e-b384-b6f834c30525/volumes"
Apr 17 08:11:32.070437 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:32.070391 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:11:42.069688 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:42.069623 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:11:52.069889 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:11:52.069840 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:12:02.069430 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:12:02.069379 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:12:12.070339 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:12:12.070287 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:12:17.751875 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:12:17.751842 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:12:27.752077 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:12:27.752040 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:12:37.752218 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:12:37.752175 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:12:47.755882 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:12:47.755840 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:12:57.751991 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:12:57.751950 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:13:07.751899 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:07.751856 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:13:17.752858 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:17.752828 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:13:25.171484 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.171450 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"]
Apr 17 08:13:25.171873 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.171818 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" containerID="cri-o://8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa" gracePeriod=30
Apr 17 08:13:25.250573 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250533 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"]
Apr 17 08:13:25.250807 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250795 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent"
Apr 17 08:13:25.250853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250808 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent"
Apr 17 08:13:25.250853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250826 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="storage-initializer"
Apr 17 08:13:25.250853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250832 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="storage-initializer"
Apr 17 08:13:25.250853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250839 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container"
Apr 17 08:13:25.250853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250845 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container"
Apr 17 08:13:25.251002 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250886 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="kserve-container"
Apr 17 08:13:25.251002 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.250893 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="2dbda477-33dd-4b6e-b384-b6f834c30525" containerName="agent"
Apr 17 08:13:25.253842 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.253827 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"
Apr 17 08:13:25.262184 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.262158 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"]
Apr 17 08:13:25.425142 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.425051 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/161cd400-ff96-4577-9ac6-4ad25e2662aa-kserve-provision-location\") pod \"isvc-primary-5ab299-predictor-6d587d75cb-d42h7\" (UID: \"161cd400-ff96-4577-9ac6-4ad25e2662aa\") " pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"
Apr 17 08:13:25.526153 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.526112 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/161cd400-ff96-4577-9ac6-4ad25e2662aa-kserve-provision-location\") pod \"isvc-primary-5ab299-predictor-6d587d75cb-d42h7\" (UID: \"161cd400-ff96-4577-9ac6-4ad25e2662aa\") " pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"
Apr 17 08:13:25.526499 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.526478 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/161cd400-ff96-4577-9ac6-4ad25e2662aa-kserve-provision-location\") pod \"isvc-primary-5ab299-predictor-6d587d75cb-d42h7\" (UID: \"161cd400-ff96-4577-9ac6-4ad25e2662aa\") " pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"
Apr 17 08:13:25.564585 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.564544 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"
Apr 17 08:13:25.679547 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.679425 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"]
Apr 17 08:13:25.682255 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:13:25.682226 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod161cd400_ff96_4577_9ac6_4ad25e2662aa.slice/crio-be1fbd2edfcc71712d7920a1be4976a8dd358235f8847892c845a92afd340b5c WatchSource:0}: Error finding container be1fbd2edfcc71712d7920a1be4976a8dd358235f8847892c845a92afd340b5c: Status 404 returned error can't find the container with id be1fbd2edfcc71712d7920a1be4976a8dd358235f8847892c845a92afd340b5c
Apr 17 08:13:25.684041 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:25.684020 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 17 08:13:26.470997 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:26.470957 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" event={"ID":"161cd400-ff96-4577-9ac6-4ad25e2662aa","Type":"ContainerStarted","Data":"570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db"}
Apr 17 08:13:26.471382 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:26.471003 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" event={"ID":"161cd400-ff96-4577-9ac6-4ad25e2662aa","Type":"ContainerStarted","Data":"be1fbd2edfcc71712d7920a1be4976a8dd358235f8847892c845a92afd340b5c"}
Apr 17 08:13:27.752124 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:27.752088 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused"
Apr 17 08:13:29.481072 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:29.480987 2569 generic.go:358] "Generic (PLEG): container finished" podID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerID="570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db" exitCode=0
Apr 17 08:13:29.481072 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:29.481048 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" event={"ID":"161cd400-ff96-4577-9ac6-4ad25e2662aa","Type":"ContainerDied","Data":"570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db"}
Apr 17 08:13:30.485045 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:30.485010 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" event={"ID":"161cd400-ff96-4577-9ac6-4ad25e2662aa","Type":"ContainerStarted","Data":"b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab"}
Apr 17 08:13:30.485404 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:30.485285 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"
Apr 17 08:13:30.486552 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:30.486523 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 17 08:13:30.500089 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:30.500047 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podStartSLOduration=5.500033903 podStartE2EDuration="5.500033903s" podCreationTimestamp="2026-04-17 08:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 08:13:30.499361587 +0000 UTC m=+1333.275327632" watchObservedRunningTime="2026-04-17 08:13:30.500033903 +0000 UTC m=+1333.276000000"
Apr 17 08:13:31.491958 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:31.491917 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 17 08:13:34.306295 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.306271 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:13:34.390445 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.390357 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/49b2198f-997e-40a9-a7c7-ae57f87bda65-kserve-provision-location\") pod \"49b2198f-997e-40a9-a7c7-ae57f87bda65\" (UID: \"49b2198f-997e-40a9-a7c7-ae57f87bda65\") "
Apr 17 08:13:34.390714 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.390691 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b2198f-997e-40a9-a7c7-ae57f87bda65-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "49b2198f-997e-40a9-a7c7-ae57f87bda65" (UID: "49b2198f-997e-40a9-a7c7-ae57f87bda65"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 17 08:13:34.491552 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.491517 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/49b2198f-997e-40a9-a7c7-ae57f87bda65-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\""
Apr 17 08:13:34.498144 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.498111 2569 generic.go:358] "Generic (PLEG): container finished" podID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerID="8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa" exitCode=0
Apr 17 08:13:34.498287 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.498191 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"
Apr 17 08:13:34.498287 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.498203 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" event={"ID":"49b2198f-997e-40a9-a7c7-ae57f87bda65","Type":"ContainerDied","Data":"8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa"}
Apr 17 08:13:34.498287 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.498251 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q" event={"ID":"49b2198f-997e-40a9-a7c7-ae57f87bda65","Type":"ContainerDied","Data":"f9d790e821c7b2946d08202d7ba64567d8b2ed3edf5c76ee0969913dcb8f9b44"}
Apr 17 08:13:34.498287 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.498268 2569 scope.go:117] "RemoveContainer" containerID="8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa"
Apr 17 08:13:34.505571 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.505554 2569 scope.go:117] "RemoveContainer" containerID="6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7"
Apr 17 08:13:34.512259 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.512239 2569 scope.go:117] "RemoveContainer" containerID="8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa"
Apr 17 08:13:34.512496 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:13:34.512479 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa\": container with ID starting with 8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa not found: ID does not exist" containerID="8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa"
Apr 17 08:13:34.512540 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.512505 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa"} err="failed to get container status \"8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa\": rpc error: code = NotFound desc = could not find container \"8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa\": container with ID starting with 8b9002f0c5319c97e80ae8d3527083bb38e2bd01571809e1bd81418ff9dfa0fa not found: ID does not exist"
Apr 17 08:13:34.512540 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.512523 2569 scope.go:117] "RemoveContainer" containerID="6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7"
Apr 17 08:13:34.512752 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:13:34.512737 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7\": container with ID starting with 6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7 not found: ID does not exist" containerID="6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7"
Apr 17 08:13:34.512807 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.512755 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7"} err="failed to get container status \"6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7\": rpc error: code = NotFound desc = could not find container \"6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7\": container with ID starting with 6ad69979f2c29e7108bb7ee61fc25f2e8701b88b45b98106ceb113265ab65cf7 not found: ID does not exist"
Apr 17 08:13:34.521906 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.521886 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"]
Apr 17 08:13:34.525344 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:34.525325 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-9c681-predictor-54ddb9ccbf-69s9q"]
Apr 17 08:13:35.753331 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:35.753296 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" path="/var/lib/kubelet/pods/49b2198f-997e-40a9-a7c7-ae57f87bda65/volumes"
Apr 17 08:13:41.490863 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:41.490766 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 17 08:13:51.490692 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:13:51.490621 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 17 08:14:01.491133 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:01.491084 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 17 08:14:11.490934 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:11.490880 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 17 08:14:21.491244 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:21.491194 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 17 08:14:31.490501 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:31.490456 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 17 08:14:38.751330 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:38.751303 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"
Apr 17 08:14:45.372770 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.372733 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"]
Apr 17 08:14:45.373162 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.373011 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="storage-initializer"
Apr 17 08:14:45.373162 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.373022 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="storage-initializer"
Apr 17 08:14:45.373162 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.373031 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container"
Apr 17 08:14:45.373162 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.373037 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container"
Apr 17 08:14:45.373162 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.373077 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="49b2198f-997e-40a9-a7c7-ae57f87bda65" containerName="kserve-container"
Apr 17 08:14:45.375792 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.375778 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:14:45.378133 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.378113 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"odh-kserve-custom-ca-bundle\""
Apr 17 08:14:45.378213 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.378113 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"invalid-s3-sa-5ab299-dockercfg-vjczf\""
Apr 17 08:14:45.379086 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.379072 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"invalid-s3-secret-5ab299\""
Apr 17 08:14:45.384274 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.384245 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"]
Apr 17 08:14:45.494934 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.494904 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/7acea439-4b1b-45a8-80c8-27227ad0353b-cabundle-cert\") pod \"isvc-secondary-5ab299-predictor-99869bf89-nrmf6\" (UID: \"7acea439-4b1b-45a8-80c8-27227ad0353b\") " pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:14:45.495095 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.494979 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/7acea439-4b1b-45a8-80c8-27227ad0353b-kserve-provision-location\") pod \"isvc-secondary-5ab299-predictor-99869bf89-nrmf6\" (UID: \"7acea439-4b1b-45a8-80c8-27227ad0353b\") " pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:14:45.595747 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.595716 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/7acea439-4b1b-45a8-80c8-27227ad0353b-cabundle-cert\") pod \"isvc-secondary-5ab299-predictor-99869bf89-nrmf6\" (UID: \"7acea439-4b1b-45a8-80c8-27227ad0353b\") " pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:14:45.595918 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.595807 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/7acea439-4b1b-45a8-80c8-27227ad0353b-kserve-provision-location\") pod \"isvc-secondary-5ab299-predictor-99869bf89-nrmf6\" (UID: \"7acea439-4b1b-45a8-80c8-27227ad0353b\") " pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:14:45.596198 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.596176 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/7acea439-4b1b-45a8-80c8-27227ad0353b-kserve-provision-location\") pod \"isvc-secondary-5ab299-predictor-99869bf89-nrmf6\" (UID: \"7acea439-4b1b-45a8-80c8-27227ad0353b\") " pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:14:45.596404 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.596386 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/7acea439-4b1b-45a8-80c8-27227ad0353b-cabundle-cert\") pod \"isvc-secondary-5ab299-predictor-99869bf89-nrmf6\" (UID: \"7acea439-4b1b-45a8-80c8-27227ad0353b\") " pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:14:45.689298 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.689220 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:14:45.805098 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:45.805070 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"]
Apr 17 08:14:45.808174 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:14:45.808144 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7acea439_4b1b_45a8_80c8_27227ad0353b.slice/crio-22d07116ff02fe00beacf406d2d5cc990b395a77dcbb12e8e68fddc2507796da WatchSource:0}: Error finding container 22d07116ff02fe00beacf406d2d5cc990b395a77dcbb12e8e68fddc2507796da: Status 404 returned error can't find the container with id 22d07116ff02fe00beacf406d2d5cc990b395a77dcbb12e8e68fddc2507796da
Apr 17 08:14:46.696078 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:46.696045 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6" event={"ID":"7acea439-4b1b-45a8-80c8-27227ad0353b","Type":"ContainerStarted","Data":"b53d28910b659a932256b0fb580c4e01c97fab582d091054299f4a6d4509c496"}
Apr 17 08:14:46.696078 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:46.696082 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6" event={"ID":"7acea439-4b1b-45a8-80c8-27227ad0353b","Type":"ContainerStarted","Data":"22d07116ff02fe00beacf406d2d5cc990b395a77dcbb12e8e68fddc2507796da"}
Apr 17 08:14:49.710225 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:49.710156 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_7acea439-4b1b-45a8-80c8-27227ad0353b/storage-initializer/0.log"
Apr 17 08:14:49.710225 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:49.710191 2569 generic.go:358] "Generic (PLEG): container finished" podID="7acea439-4b1b-45a8-80c8-27227ad0353b" containerID="b53d28910b659a932256b0fb580c4e01c97fab582d091054299f4a6d4509c496" exitCode=1
Apr 17 08:14:49.710624 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:49.710237 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6" event={"ID":"7acea439-4b1b-45a8-80c8-27227ad0353b","Type":"ContainerDied","Data":"b53d28910b659a932256b0fb580c4e01c97fab582d091054299f4a6d4509c496"}
Apr 17 08:14:50.714983 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:50.714956 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_7acea439-4b1b-45a8-80c8-27227ad0353b/storage-initializer/0.log"
Apr 17 08:14:50.715350 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:50.715050 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6" event={"ID":"7acea439-4b1b-45a8-80c8-27227ad0353b","Type":"ContainerStarted","Data":"df0f9f7ba1ccd73de690ca01e193559c80c76794f82df46a2258f25ce5913e0f"}
Apr 17 08:14:56.736948 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:56.736921 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_7acea439-4b1b-45a8-80c8-27227ad0353b/storage-initializer/1.log"
Apr 17 08:14:56.737336 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:56.737239 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_7acea439-4b1b-45a8-80c8-27227ad0353b/storage-initializer/0.log"
Apr 17 08:14:56.737336 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:56.737271 2569 generic.go:358] "Generic (PLEG): container finished" podID="7acea439-4b1b-45a8-80c8-27227ad0353b" containerID="df0f9f7ba1ccd73de690ca01e193559c80c76794f82df46a2258f25ce5913e0f" exitCode=1
Apr 17 08:14:56.737437 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:56.737348 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6" event={"ID":"7acea439-4b1b-45a8-80c8-27227ad0353b","Type":"ContainerDied","Data":"df0f9f7ba1ccd73de690ca01e193559c80c76794f82df46a2258f25ce5913e0f"}
Apr 17 08:14:56.737437 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:56.737387 2569 scope.go:117] "RemoveContainer" containerID="b53d28910b659a932256b0fb580c4e01c97fab582d091054299f4a6d4509c496"
Apr 17 08:14:56.737762 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:56.737744 2569 scope.go:117] "RemoveContainer" containerID="b53d28910b659a932256b0fb580c4e01c97fab582d091054299f4a6d4509c496"
Apr 17 08:14:56.747266 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:14:56.747242 2569 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_kserve-ci-e2e-test_7acea439-4b1b-45a8-80c8-27227ad0353b_0 in pod sandbox 22d07116ff02fe00beacf406d2d5cc990b395a77dcbb12e8e68fddc2507796da from index: no such id: 'b53d28910b659a932256b0fb580c4e01c97fab582d091054299f4a6d4509c496'" containerID="b53d28910b659a932256b0fb580c4e01c97fab582d091054299f4a6d4509c496"
Apr 17 08:14:56.747334 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:14:56.747295 2569 kuberuntime_container.go:951] "Unhandled Error" err="failed to remove pod init container \"storage-initializer\": rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_kserve-ci-e2e-test_7acea439-4b1b-45a8-80c8-27227ad0353b_0 in pod sandbox 22d07116ff02fe00beacf406d2d5cc990b395a77dcbb12e8e68fddc2507796da from index: no such id: 'b53d28910b659a932256b0fb580c4e01c97fab582d091054299f4a6d4509c496'; Skipping pod \"isvc-secondary-5ab299-predictor-99869bf89-nrmf6_kserve-ci-e2e-test(7acea439-4b1b-45a8-80c8-27227ad0353b)\"" logger="UnhandledError"
Apr 17 08:14:56.748585 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:14:56.748566 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-initializer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-initializer pod=isvc-secondary-5ab299-predictor-99869bf89-nrmf6_kserve-ci-e2e-test(7acea439-4b1b-45a8-80c8-27227ad0353b)\"" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6" podUID="7acea439-4b1b-45a8-80c8-27227ad0353b"
Apr 17 08:14:57.740896 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:14:57.740866 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_7acea439-4b1b-45a8-80c8-27227ad0353b/storage-initializer/1.log"
Apr 17 08:15:01.447045 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.447006 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"]
Apr 17 08:15:01.503112 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.503061 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"]
Apr 17 08:15:01.503623 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.503510 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" containerID="cri-o://b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab" gracePeriod=30
Apr 17 08:15:01.560678 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.560635 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"]
Apr 17 08:15:01.564807 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.564788 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"
Apr 17 08:15:01.567134 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.567119 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"fail-s3-secret-a41b09\""
Apr 17 08:15:01.567523 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.567504 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"fail-s3-sa-a41b09-dockercfg-mt7dt\""
Apr 17 08:15:01.571650 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.571627 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"]
Apr 17 08:15:01.572833 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.572816 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_7acea439-4b1b-45a8-80c8-27227ad0353b/storage-initializer/1.log"
Apr 17 08:15:01.572926 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.572863 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"
Apr 17 08:15:01.713373 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.713298 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/7acea439-4b1b-45a8-80c8-27227ad0353b-cabundle-cert\") pod \"7acea439-4b1b-45a8-80c8-27227ad0353b\" (UID: \"7acea439-4b1b-45a8-80c8-27227ad0353b\") "
Apr 17 08:15:01.713373 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.713363 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/7acea439-4b1b-45a8-80c8-27227ad0353b-kserve-provision-location\") pod \"7acea439-4b1b-45a8-80c8-27227ad0353b\" (UID: \"7acea439-4b1b-45a8-80c8-27227ad0353b\") "
Apr 17 08:15:01.713585 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.713474 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2649c398-19e4-4060-9033-acd46396b116-kserve-provision-location\") pod \"isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4\" (UID: \"2649c398-19e4-4060-9033-acd46396b116\") " pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"
Apr 17 08:15:01.713585 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.713497 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/2649c398-19e4-4060-9033-acd46396b116-cabundle-cert\") pod \"isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4\" (UID: \"2649c398-19e4-4060-9033-acd46396b116\") " pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"
Apr 17 08:15:01.713719 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.713622 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7acea439-4b1b-45a8-80c8-27227ad0353b-cabundle-cert" (OuterVolumeSpecName: "cabundle-cert") pod "7acea439-4b1b-45a8-80c8-27227ad0353b" (UID: "7acea439-4b1b-45a8-80c8-27227ad0353b"). InnerVolumeSpecName "cabundle-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 08:15:01.713719 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.713669 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acea439-4b1b-45a8-80c8-27227ad0353b-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "7acea439-4b1b-45a8-80c8-27227ad0353b" (UID: "7acea439-4b1b-45a8-80c8-27227ad0353b"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 17 08:15:01.752633 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.752607 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-5ab299-predictor-99869bf89-nrmf6_7acea439-4b1b-45a8-80c8-27227ad0353b/storage-initializer/1.log"
Apr 17 08:15:01.752825 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.752808 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6" Apr 17 08:15:01.753319 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.753296 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6" event={"ID":"7acea439-4b1b-45a8-80c8-27227ad0353b","Type":"ContainerDied","Data":"22d07116ff02fe00beacf406d2d5cc990b395a77dcbb12e8e68fddc2507796da"} Apr 17 08:15:01.753415 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.753329 2569 scope.go:117] "RemoveContainer" containerID="df0f9f7ba1ccd73de690ca01e193559c80c76794f82df46a2258f25ce5913e0f" Apr 17 08:15:01.786596 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.786570 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"] Apr 17 08:15:01.789937 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.789914 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-5ab299-predictor-99869bf89-nrmf6"] Apr 17 08:15:01.813978 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.813957 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2649c398-19e4-4060-9033-acd46396b116-kserve-provision-location\") pod \"isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4\" (UID: \"2649c398-19e4-4060-9033-acd46396b116\") " pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" Apr 17 08:15:01.814068 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.813986 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/2649c398-19e4-4060-9033-acd46396b116-cabundle-cert\") pod \"isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4\" (UID: \"2649c398-19e4-4060-9033-acd46396b116\") " pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" Apr 17 08:15:01.814068 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.814048 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/7acea439-4b1b-45a8-80c8-27227ad0353b-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:15:01.814068 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.814059 2569 reconciler_common.go:299] "Volume detached for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/7acea439-4b1b-45a8-80c8-27227ad0353b-cabundle-cert\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:15:01.814298 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.814282 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2649c398-19e4-4060-9033-acd46396b116-kserve-provision-location\") pod \"isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4\" (UID: \"2649c398-19e4-4060-9033-acd46396b116\") " pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" Apr 17 08:15:01.814560 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.814545 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/2649c398-19e4-4060-9033-acd46396b116-cabundle-cert\") pod \"isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4\" (UID: \"2649c398-19e4-4060-9033-acd46396b116\") " 
pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" Apr 17 08:15:01.883137 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.883112 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" Apr 17 08:15:01.999033 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:01.999007 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"] Apr 17 08:15:02.001086 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:15:02.001062 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2649c398_19e4_4060_9033_acd46396b116.slice/crio-b79f0448f955aaac75a2f94322f9ac99e432a626d307f5d7f49d0d846535f867 WatchSource:0}: Error finding container b79f0448f955aaac75a2f94322f9ac99e432a626d307f5d7f49d0d846535f867: Status 404 returned error can't find the container with id b79f0448f955aaac75a2f94322f9ac99e432a626d307f5d7f49d0d846535f867 Apr 17 08:15:02.756808 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:02.756772 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" event={"ID":"2649c398-19e4-4060-9033-acd46396b116","Type":"ContainerStarted","Data":"37bc5dbcf940b7477c323b6d663addd6570058e6bdff3a18379c6e7e5a5616af"} Apr 17 08:15:02.756808 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:02.756808 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" event={"ID":"2649c398-19e4-4060-9033-acd46396b116","Type":"ContainerStarted","Data":"b79f0448f955aaac75a2f94322f9ac99e432a626d307f5d7f49d0d846535f867"} Apr 17 08:15:03.753808 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:03.753767 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acea439-4b1b-45a8-80c8-27227ad0353b" path="/var/lib/kubelet/pods/7acea439-4b1b-45a8-80c8-27227ad0353b/volumes" Apr 17 08:15:05.541322 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.539901 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" Apr 17 08:15:05.639677 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.639578 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/161cd400-ff96-4577-9ac6-4ad25e2662aa-kserve-provision-location\") pod \"161cd400-ff96-4577-9ac6-4ad25e2662aa\" (UID: \"161cd400-ff96-4577-9ac6-4ad25e2662aa\") " Apr 17 08:15:05.639943 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.639911 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161cd400-ff96-4577-9ac6-4ad25e2662aa-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "161cd400-ff96-4577-9ac6-4ad25e2662aa" (UID: "161cd400-ff96-4577-9ac6-4ad25e2662aa"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 08:15:05.740540 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.740511 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/161cd400-ff96-4577-9ac6-4ad25e2662aa-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:15:05.765780 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.765752 2569 generic.go:358] "Generic (PLEG): container finished" podID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerID="b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab" exitCode=0 Apr 17 08:15:05.765906 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.765810 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" event={"ID":"161cd400-ff96-4577-9ac6-4ad25e2662aa","Type":"ContainerDied","Data":"b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab"} Apr 17 08:15:05.765906 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.765829 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" Apr 17 08:15:05.765906 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.765838 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7" event={"ID":"161cd400-ff96-4577-9ac6-4ad25e2662aa","Type":"ContainerDied","Data":"be1fbd2edfcc71712d7920a1be4976a8dd358235f8847892c845a92afd340b5c"} Apr 17 08:15:05.765906 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.765853 2569 scope.go:117] "RemoveContainer" containerID="b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab" Apr 17 08:15:05.773355 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.773336 2569 scope.go:117] "RemoveContainer" containerID="570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db" Apr 17 08:15:05.779833 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.779818 2569 scope.go:117] "RemoveContainer" containerID="b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab" Apr 17 08:15:05.780103 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:15:05.780082 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab\": container with ID starting with b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab not found: ID does not exist" containerID="b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab" Apr 17 08:15:05.780192 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.780110 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab"} err="failed to get container status \"b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab\": rpc error: code = NotFound desc = could not find container \"b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab\": container with ID starting with b27a533d76f6d25af2d2391607b65e00d74812c3acd151805394e25aa60029ab not found: ID does not exist" Apr 17 08:15:05.780192 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.780127 2569 scope.go:117] "RemoveContainer" containerID="570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db" Apr 17 08:15:05.780400 ip-10-0-132-178 
kubenswrapper[2569]: E0417 08:15:05.780372 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db\": container with ID starting with 570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db not found: ID does not exist" containerID="570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db" Apr 17 08:15:05.780462 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.780407 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db"} err="failed to get container status \"570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db\": rpc error: code = NotFound desc = could not find container \"570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db\": container with ID starting with 570c8756a0733c3aae2ce18f0a18212e46eb81c0969af7e19e082c8fbda4e9db not found: ID does not exist" Apr 17 08:15:05.781118 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.781098 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"] Apr 17 08:15:05.784482 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:05.784462 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-5ab299-predictor-6d587d75cb-d42h7"] Apr 17 08:15:07.754171 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:07.754097 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" path="/var/lib/kubelet/pods/161cd400-ff96-4577-9ac6-4ad25e2662aa/volumes" Apr 17 08:15:08.776920 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:08.776895 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4_2649c398-19e4-4060-9033-acd46396b116/storage-initializer/0.log" Apr 17 08:15:08.777315 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:08.776927 2569 generic.go:358] "Generic (PLEG): container finished" podID="2649c398-19e4-4060-9033-acd46396b116" containerID="37bc5dbcf940b7477c323b6d663addd6570058e6bdff3a18379c6e7e5a5616af" exitCode=1 Apr 17 08:15:08.777315 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:08.776998 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" event={"ID":"2649c398-19e4-4060-9033-acd46396b116","Type":"ContainerDied","Data":"37bc5dbcf940b7477c323b6d663addd6570058e6bdff3a18379c6e7e5a5616af"} Apr 17 08:15:09.780665 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:09.780621 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4_2649c398-19e4-4060-9033-acd46396b116/storage-initializer/0.log" Apr 17 08:15:09.781045 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:09.780701 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" event={"ID":"2649c398-19e4-4060-9033-acd46396b116","Type":"ContainerStarted","Data":"50ad479b4ddddf8a6065ed108889b3be0619e305b5922415a354f929f2e56ec1"} Apr 17 08:15:11.581574 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.581543 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"] Apr 17 08:15:11.582064 ip-10-0-132-178 
kubenswrapper[2569]: I0417 08:15:11.581794 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" podUID="2649c398-19e4-4060-9033-acd46396b116" containerName="storage-initializer" containerID="cri-o://50ad479b4ddddf8a6065ed108889b3be0619e305b5922415a354f929f2e56ec1" gracePeriod=30 Apr 17 08:15:11.706618 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706588 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"] Apr 17 08:15:11.706866 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706855 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="storage-initializer" Apr 17 08:15:11.706916 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706868 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="storage-initializer" Apr 17 08:15:11.706916 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706886 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" Apr 17 08:15:11.706916 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706891 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" Apr 17 08:15:11.706916 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706902 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7acea439-4b1b-45a8-80c8-27227ad0353b" containerName="storage-initializer" Apr 17 08:15:11.706916 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706908 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acea439-4b1b-45a8-80c8-27227ad0353b" containerName="storage-initializer" Apr 17 08:15:11.706916 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706913 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7acea439-4b1b-45a8-80c8-27227ad0353b" containerName="storage-initializer" Apr 17 08:15:11.707081 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706919 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acea439-4b1b-45a8-80c8-27227ad0353b" containerName="storage-initializer" Apr 17 08:15:11.707081 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706962 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="7acea439-4b1b-45a8-80c8-27227ad0353b" containerName="storage-initializer" Apr 17 08:15:11.707081 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706969 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="7acea439-4b1b-45a8-80c8-27227ad0353b" containerName="storage-initializer" Apr 17 08:15:11.707081 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.706976 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="161cd400-ff96-4577-9ac6-4ad25e2662aa" containerName="kserve-container" Apr 17 08:15:11.709869 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.709854 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" Apr 17 08:15:11.712154 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.712134 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-k26cm\"" Apr 17 08:15:11.715870 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.715842 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"] Apr 17 08:15:11.776976 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.776944 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f9330324-5617-4d7b-9d8e-65bbe6d2742a-kserve-provision-location\") pod \"raw-sklearn-3e909-predictor-d7874586f-26kwx\" (UID: \"f9330324-5617-4d7b-9d8e-65bbe6d2742a\") " pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" Apr 17 08:15:11.877401 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.877313 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f9330324-5617-4d7b-9d8e-65bbe6d2742a-kserve-provision-location\") pod \"raw-sklearn-3e909-predictor-d7874586f-26kwx\" (UID: \"f9330324-5617-4d7b-9d8e-65bbe6d2742a\") " pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" Apr 17 08:15:11.877731 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:11.877707 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f9330324-5617-4d7b-9d8e-65bbe6d2742a-kserve-provision-location\") pod \"raw-sklearn-3e909-predictor-d7874586f-26kwx\" (UID: \"f9330324-5617-4d7b-9d8e-65bbe6d2742a\") " pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" Apr 17 08:15:12.021342 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.021310 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" Apr 17 08:15:12.134469 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.134399 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"] Apr 17 08:15:12.137136 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:15:12.137108 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9330324_5617_4d7b_9d8e_65bbe6d2742a.slice/crio-7259f3424b6eccf70ec83610afedaae17c15d6b4f48253120c3053d50c0b51c2 WatchSource:0}: Error finding container 7259f3424b6eccf70ec83610afedaae17c15d6b4f48253120c3053d50c0b51c2: Status 404 returned error can't find the container with id 7259f3424b6eccf70ec83610afedaae17c15d6b4f48253120c3053d50c0b51c2 Apr 17 08:15:12.790179 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.790151 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4_2649c398-19e4-4060-9033-acd46396b116/storage-initializer/1.log" Apr 17 08:15:12.790570 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.790554 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4_2649c398-19e4-4060-9033-acd46396b116/storage-initializer/0.log" Apr 17 08:15:12.790695 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.790597 2569 generic.go:358] "Generic (PLEG): container finished" podID="2649c398-19e4-4060-9033-acd46396b116" containerID="50ad479b4ddddf8a6065ed108889b3be0619e305b5922415a354f929f2e56ec1" exitCode=1 Apr 17 08:15:12.790756 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.790690 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" event={"ID":"2649c398-19e4-4060-9033-acd46396b116","Type":"ContainerDied","Data":"50ad479b4ddddf8a6065ed108889b3be0619e305b5922415a354f929f2e56ec1"} Apr 17 08:15:12.790756 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.790741 2569 scope.go:117] "RemoveContainer" containerID="37bc5dbcf940b7477c323b6d663addd6570058e6bdff3a18379c6e7e5a5616af" Apr 17 08:15:12.791973 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.791952 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" event={"ID":"f9330324-5617-4d7b-9d8e-65bbe6d2742a","Type":"ContainerStarted","Data":"43b190079e56369c46f31eeaac0eaf2a8a1fed723a95f3583c5e42ddda4e2df4"} Apr 17 08:15:12.792066 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.791980 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" event={"ID":"f9330324-5617-4d7b-9d8e-65bbe6d2742a","Type":"ContainerStarted","Data":"7259f3424b6eccf70ec83610afedaae17c15d6b4f48253120c3053d50c0b51c2"} Apr 17 08:15:12.813363 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.813346 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4_2649c398-19e4-4060-9033-acd46396b116/storage-initializer/1.log" Apr 17 08:15:12.813467 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.813408 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" Apr 17 08:15:12.985022 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.984939 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/2649c398-19e4-4060-9033-acd46396b116-cabundle-cert\") pod \"2649c398-19e4-4060-9033-acd46396b116\" (UID: \"2649c398-19e4-4060-9033-acd46396b116\") " Apr 17 08:15:12.985022 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.984986 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2649c398-19e4-4060-9033-acd46396b116-kserve-provision-location\") pod \"2649c398-19e4-4060-9033-acd46396b116\" (UID: \"2649c398-19e4-4060-9033-acd46396b116\") " Apr 17 08:15:12.985272 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.985251 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2649c398-19e4-4060-9033-acd46396b116-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "2649c398-19e4-4060-9033-acd46396b116" (UID: "2649c398-19e4-4060-9033-acd46396b116"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 08:15:12.985337 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:12.985314 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2649c398-19e4-4060-9033-acd46396b116-cabundle-cert" (OuterVolumeSpecName: "cabundle-cert") pod "2649c398-19e4-4060-9033-acd46396b116" (UID: "2649c398-19e4-4060-9033-acd46396b116"). InnerVolumeSpecName "cabundle-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 08:15:13.085614 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:13.085583 2569 reconciler_common.go:299] "Volume detached for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/2649c398-19e4-4060-9033-acd46396b116-cabundle-cert\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:15:13.085614 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:13.085610 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/2649c398-19e4-4060-9033-acd46396b116-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\"" Apr 17 08:15:13.795338 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:13.795315 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4_2649c398-19e4-4060-9033-acd46396b116/storage-initializer/1.log" Apr 17 08:15:13.795757 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:13.795425 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" Apr 17 08:15:13.795757 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:13.795457 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4" event={"ID":"2649c398-19e4-4060-9033-acd46396b116","Type":"ContainerDied","Data":"b79f0448f955aaac75a2f94322f9ac99e432a626d307f5d7f49d0d846535f867"} Apr 17 08:15:13.795757 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:13.795503 2569 scope.go:117] "RemoveContainer" containerID="50ad479b4ddddf8a6065ed108889b3be0619e305b5922415a354f929f2e56ec1" Apr 17 08:15:13.823977 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:13.823956 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"] Apr 17 08:15:13.827197 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:13.827172 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-a41b09-predictor-5cdf5dbfbf-9p5t4"] Apr 17 08:15:15.753525 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:15.753488 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2649c398-19e4-4060-9033-acd46396b116" path="/var/lib/kubelet/pods/2649c398-19e4-4060-9033-acd46396b116/volumes" Apr 17 08:15:16.806550 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:16.806515 2569 generic.go:358] "Generic (PLEG): container finished" podID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerID="43b190079e56369c46f31eeaac0eaf2a8a1fed723a95f3583c5e42ddda4e2df4" exitCode=0 Apr 17 08:15:16.806959 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:16.806593 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" event={"ID":"f9330324-5617-4d7b-9d8e-65bbe6d2742a","Type":"ContainerDied","Data":"43b190079e56369c46f31eeaac0eaf2a8a1fed723a95f3583c5e42ddda4e2df4"} Apr 17 08:15:17.810923 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:17.810897 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" event={"ID":"f9330324-5617-4d7b-9d8e-65bbe6d2742a","Type":"ContainerStarted","Data":"9b52bdcf476a7b7eb9d47c5ecb523dde3cc65aecc2a4aa7c038aa59aded7a546"} Apr 17 08:15:17.811262 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:17.811243 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" Apr 17 08:15:17.812559 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:17.812535 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 17 08:15:17.826498 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:17.826433 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podStartSLOduration=6.826420117 podStartE2EDuration="6.826420117s" podCreationTimestamp="2026-04-17 08:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 08:15:17.824893715 +0000 UTC m=+1440.600859742" watchObservedRunningTime="2026-04-17 08:15:17.826420117 +0000 UTC m=+1440.602386140" 
Apr 17 08:15:18.814375 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:18.814339 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused"
Apr 17 08:15:28.814843 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:28.814801 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused"
Apr 17 08:15:38.815183 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:38.815141 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused"
Apr 17 08:15:48.814776 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:48.814731 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused"
Apr 17 08:15:58.815096 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:15:58.815050 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused"
Apr 17 08:16:08.815262 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:08.815218 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused"
Apr 17 08:16:17.735948 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:17.735921 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log"
Apr 17 08:16:17.739723 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:17.739699 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log"
Apr 17 08:16:18.814581 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:18.814531 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused"
Apr 17 08:16:23.753992 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:23.753965 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"
Apr 17 08:16:31.826264 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.826226 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"]
Apr 17 08:16:31.826631 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.826496 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" containerID="cri-o://9b52bdcf476a7b7eb9d47c5ecb523dde3cc65aecc2a4aa7c038aa59aded7a546" gracePeriod=30
Apr 17 08:16:31.870509 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.870471 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"]
Apr 17 08:16:31.870865 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.870846 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2649c398-19e4-4060-9033-acd46396b116" containerName="storage-initializer"
Apr 17 08:16:31.870951 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.870866 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="2649c398-19e4-4060-9033-acd46396b116" containerName="storage-initializer"
Apr 17 08:16:31.870951 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.870894 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2649c398-19e4-4060-9033-acd46396b116" containerName="storage-initializer"
Apr 17 08:16:31.870951 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.870902 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="2649c398-19e4-4060-9033-acd46396b116" containerName="storage-initializer"
Apr 17 08:16:31.871114 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.870994 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="2649c398-19e4-4060-9033-acd46396b116" containerName="storage-initializer"
Apr 17 08:16:31.871114 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.871008 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="2649c398-19e4-4060-9033-acd46396b116" containerName="storage-initializer"
Apr 17 08:16:31.873885 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.873864 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:16:31.881354 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.881334 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"]
Apr 17 08:16:31.945258 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:31.945219 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f5110cd0-6941-4e41-9d68-205af8c50dc1-kserve-provision-location\") pod \"raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f\" (UID: \"f5110cd0-6941-4e41-9d68-205af8c50dc1\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:16:32.046348 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:32.046312 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f5110cd0-6941-4e41-9d68-205af8c50dc1-kserve-provision-location\") pod \"raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f\" (UID: \"f5110cd0-6941-4e41-9d68-205af8c50dc1\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:16:32.046675 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:32.046636 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f5110cd0-6941-4e41-9d68-205af8c50dc1-kserve-provision-location\") pod \"raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f\" (UID: \"f5110cd0-6941-4e41-9d68-205af8c50dc1\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:16:32.184383 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:32.184308 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:16:32.302761 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:32.302692 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"]
Apr 17 08:16:32.305255 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:16:32.305220 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5110cd0_6941_4e41_9d68_205af8c50dc1.slice/crio-3cf6966ca82199c7f13d1e44463365a7d2eda1bf640f7b140534892d49748af5 WatchSource:0}: Error finding container 3cf6966ca82199c7f13d1e44463365a7d2eda1bf640f7b140534892d49748af5: Status 404 returned error can't find the container with id 3cf6966ca82199c7f13d1e44463365a7d2eda1bf640f7b140534892d49748af5
Apr 17 08:16:33.009129 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:33.009094 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" event={"ID":"f5110cd0-6941-4e41-9d68-205af8c50dc1","Type":"ContainerStarted","Data":"65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698"}
Apr 17 08:16:33.009129 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:33.009130 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" event={"ID":"f5110cd0-6941-4e41-9d68-205af8c50dc1","Type":"ContainerStarted","Data":"3cf6966ca82199c7f13d1e44463365a7d2eda1bf640f7b140534892d49748af5"}
Apr 17 08:16:33.750827 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:33.750776 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused"
Apr 17 08:16:36.019705 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:36.019628 2569 generic.go:358] "Generic (PLEG): container finished" podID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerID="9b52bdcf476a7b7eb9d47c5ecb523dde3cc65aecc2a4aa7c038aa59aded7a546" exitCode=0
Apr 17 08:16:36.020128 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:36.019793 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" event={"ID":"f9330324-5617-4d7b-9d8e-65bbe6d2742a","Type":"ContainerDied","Data":"9b52bdcf476a7b7eb9d47c5ecb523dde3cc65aecc2a4aa7c038aa59aded7a546"}
Apr 17 08:16:36.116114 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:36.116093 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"
Apr 17 08:16:36.280333 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:36.280309 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f9330324-5617-4d7b-9d8e-65bbe6d2742a-kserve-provision-location\") pod \"f9330324-5617-4d7b-9d8e-65bbe6d2742a\" (UID: \"f9330324-5617-4d7b-9d8e-65bbe6d2742a\") "
Apr 17 08:16:36.280703 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:36.280676 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9330324-5617-4d7b-9d8e-65bbe6d2742a-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "f9330324-5617-4d7b-9d8e-65bbe6d2742a" (UID: "f9330324-5617-4d7b-9d8e-65bbe6d2742a"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 17 08:16:36.380773 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:36.380706 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f9330324-5617-4d7b-9d8e-65bbe6d2742a-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\""
Apr 17 08:16:37.023324 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.023290 2569 generic.go:358] "Generic (PLEG): container finished" podID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerID="65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698" exitCode=0
Apr 17 08:16:37.023754 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.023366 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" event={"ID":"f5110cd0-6941-4e41-9d68-205af8c50dc1","Type":"ContainerDied","Data":"65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698"}
Apr 17 08:16:37.024824 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.024800 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx" event={"ID":"f9330324-5617-4d7b-9d8e-65bbe6d2742a","Type":"ContainerDied","Data":"7259f3424b6eccf70ec83610afedaae17c15d6b4f48253120c3053d50c0b51c2"}
Apr 17 08:16:37.024900 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.024847 2569 scope.go:117] "RemoveContainer" containerID="9b52bdcf476a7b7eb9d47c5ecb523dde3cc65aecc2a4aa7c038aa59aded7a546"
Apr 17 08:16:37.024900 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.024875 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"
Apr 17 08:16:37.035171 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.035151 2569 scope.go:117] "RemoveContainer" containerID="43b190079e56369c46f31eeaac0eaf2a8a1fed723a95f3583c5e42ddda4e2df4"
Apr 17 08:16:37.050728 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.050707 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"]
Apr 17 08:16:37.052674 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.052639 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-3e909-predictor-d7874586f-26kwx"]
Apr 17 08:16:37.753248 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:37.753207 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" path="/var/lib/kubelet/pods/f9330324-5617-4d7b-9d8e-65bbe6d2742a/volumes"
Apr 17 08:16:38.029184 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:38.029150 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" event={"ID":"f5110cd0-6941-4e41-9d68-205af8c50dc1","Type":"ContainerStarted","Data":"05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517"}
Apr 17 08:16:38.029621 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:38.029512 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:16:38.030407 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:38.030383 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:16:38.050757 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:38.050714 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podStartSLOduration=7.050702569 podStartE2EDuration="7.050702569s" podCreationTimestamp="2026-04-17 08:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 08:16:38.048316914 +0000 UTC m=+1520.824282961" watchObservedRunningTime="2026-04-17 08:16:38.050702569 +0000 UTC m=+1520.826668592"
Apr 17 08:16:39.033621 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:39.033586 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:16:49.034637 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:49.034587 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:16:59.034186 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:16:59.034132 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:17:09.033818 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:09.033761 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:17:19.034212 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:19.034166 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:17:29.034066 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:29.034015 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:17:39.033969 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:39.033917 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:17:44.750827 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:44.750795 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:17:51.986053 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:51.986018 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"]
Apr 17 08:17:51.986493 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:51.986314 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" containerID="cri-o://05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517" gracePeriod=30
Apr 17 08:17:54.750331 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:54.750288 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused"
Apr 17 08:17:56.531023 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:56.530997 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:17:56.657190 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:56.657102 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f5110cd0-6941-4e41-9d68-205af8c50dc1-kserve-provision-location\") pod \"f5110cd0-6941-4e41-9d68-205af8c50dc1\" (UID: \"f5110cd0-6941-4e41-9d68-205af8c50dc1\") "
Apr 17 08:17:56.657436 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:56.657411 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5110cd0-6941-4e41-9d68-205af8c50dc1-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "f5110cd0-6941-4e41-9d68-205af8c50dc1" (UID: "f5110cd0-6941-4e41-9d68-205af8c50dc1"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 17 08:17:56.757735 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:56.757694 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f5110cd0-6941-4e41-9d68-205af8c50dc1-kserve-provision-location\") on node \"ip-10-0-132-178.ec2.internal\" DevicePath \"\""
Apr 17 08:17:57.257005 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.256971 2569 generic.go:358] "Generic (PLEG): container finished" podID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerID="05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517" exitCode=0
Apr 17 08:17:57.257184 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.257040 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"
Apr 17 08:17:57.257184 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.257040 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" event={"ID":"f5110cd0-6941-4e41-9d68-205af8c50dc1","Type":"ContainerDied","Data":"05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517"}
Apr 17 08:17:57.257184 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.257143 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f" event={"ID":"f5110cd0-6941-4e41-9d68-205af8c50dc1","Type":"ContainerDied","Data":"3cf6966ca82199c7f13d1e44463365a7d2eda1bf640f7b140534892d49748af5"}
Apr 17 08:17:57.257184 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.257158 2569 scope.go:117] "RemoveContainer" containerID="05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517"
Apr 17 08:17:57.264747 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.264510 2569 scope.go:117] "RemoveContainer" containerID="65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698"
Apr 17 08:17:57.271315 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.271297 2569 scope.go:117] "RemoveContainer" containerID="05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517"
Apr 17 08:17:57.271541 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:17:57.271524 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517\": container with ID starting with 05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517 not found: ID does not exist" containerID="05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517"
Apr 17 08:17:57.271600 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.271550 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517"} err="failed to get container status \"05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517\": rpc error: code = NotFound desc = could not find container \"05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517\": container with ID starting with 05479bf5fabd2da4ead69786407f23a2128265019c680f742056d76510f37517 not found: ID does not exist"
Apr 17 08:17:57.271600 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.271566 2569 scope.go:117] "RemoveContainer" containerID="65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698"
Apr 17 08:17:57.271796 ip-10-0-132-178 kubenswrapper[2569]: E0417 08:17:57.271776 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698\": container with ID starting with 65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698 not found: ID does not exist" containerID="65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698"
Apr 17 08:17:57.271841 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.271803 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698"} err="failed to get container status \"65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698\": rpc error: code = NotFound desc = could not find container \"65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698\": container with ID starting with 65378123bffdc6d0b71cdac1163c8055a711fcbede69f9dc01e29efb6066c698 not found: ID does not exist"
Apr 17 08:17:57.277574 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.277551 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"]
Apr 17 08:17:57.282869 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.282848 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-44455-predictor-7f66488f7f-pzr9f"]
Apr 17 08:17:57.753631 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:17:57.753586 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" path="/var/lib/kubelet/pods/f5110cd0-6941-4e41-9d68-205af8c50dc1/volumes"
Apr 17 08:18:20.073509 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:20.073460 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-6xq6f_69371276-3a35-470f-aaf5-f3677601470b/global-pull-secret-syncer/0.log"
Apr 17 08:18:20.249556 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:20.249525 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-txzjm_edf1e79e-27d7-4945-a525-2276d0340c21/konnectivity-agent/0.log"
Apr 17 08:18:20.268226 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:20.268180 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-132-178.ec2.internal_3dd336e8c07dea5cd1d3b829faa22207/haproxy/0.log"
Apr 17 08:18:23.780175 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:23.780148 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-5bldv_0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0/node-exporter/0.log"
Apr 17 08:18:23.803576 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:23.803552 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-5bldv_0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0/kube-rbac-proxy/0.log"
Apr 17 08:18:23.828801 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:23.828781 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-5bldv_0075d905-cd03-4cbe-ab5a-c8df3eb1fbe0/init-textfile/0.log"
Apr 17 08:18:27.050954 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.050921 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx"]
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051166 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051176 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051188 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="storage-initializer"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051193 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="storage-initializer"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051199 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051205 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051219 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="storage-initializer"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051224 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="storage-initializer"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051260 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="f9330324-5617-4d7b-9d8e-65bbe6d2742a" containerName="kserve-container"
Apr 17 08:18:27.051332 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.051267 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="f5110cd0-6941-4e41-9d68-205af8c50dc1" containerName="kserve-container"
Apr 17 08:18:27.055115 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.055094 2569 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.057631 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.057613 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-nl5sm\"/\"kube-root-ca.crt\"" Apr 17 08:18:27.058582 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.058565 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-nl5sm\"/\"default-dockercfg-r66q7\"" Apr 17 08:18:27.058671 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.058577 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-nl5sm\"/\"openshift-service-ca.crt\"" Apr 17 08:18:27.063052 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.063030 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx"] Apr 17 08:18:27.071550 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.071519 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-sys\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.071634 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.071568 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-proc\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.071634 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.071624 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-podres\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.071731 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.071692 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q6vc\" (UniqueName: \"kubernetes.io/projected/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-kube-api-access-5q6vc\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.071731 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.071723 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-lib-modules\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.172911 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.172874 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5q6vc\" (UniqueName: \"kubernetes.io/projected/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-kube-api-access-5q6vc\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") 
" pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.172911 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.172917 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-lib-modules\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.173163 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.172950 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-sys\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.173163 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.172967 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-proc\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.173163 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.172989 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-podres\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.173163 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.173069 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-sys\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.173163 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.173099 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-proc\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.173163 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.173121 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-podres\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.173163 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.173122 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-lib-modules\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.181149 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.181120 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5q6vc\" (UniqueName: \"kubernetes.io/projected/a9e1b118-7b8c-477f-9db8-6418ce9b0ed9-kube-api-access-5q6vc\") pod \"perf-node-gather-daemonset-c9jhx\" (UID: \"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9\") " pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.364782 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.364700 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:27.481795 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.481677 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx"] Apr 17 08:18:27.484442 ip-10-0-132-178 kubenswrapper[2569]: W0417 08:18:27.484413 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda9e1b118_7b8c_477f_9db8_6418ce9b0ed9.slice/crio-e25d90a1d243c10d92de03a9b919b38f0c0d739fb0604b76e12d0bf938ffaec2 WatchSource:0}: Error finding container e25d90a1d243c10d92de03a9b919b38f0c0d739fb0604b76e12d0bf938ffaec2: Status 404 returned error can't find the container with id e25d90a1d243c10d92de03a9b919b38f0c0d739fb0604b76e12d0bf938ffaec2 Apr 17 08:18:27.485934 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.485917 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 08:18:27.825123 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.825094 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-mlkb9_cb2327ec-f42e-4e5d-b122-579bbba751a8/dns/0.log" Apr 17 08:18:27.845988 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.845967 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-mlkb9_cb2327ec-f42e-4e5d-b122-579bbba751a8/kube-rbac-proxy/0.log" Apr 17 08:18:27.953674 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:27.953628 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-xg76x_9719cdb1-840b-4a4d-8e68-be9ea50fc183/dns-node-resolver/0.log" Apr 17 08:18:28.348291 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:28.348259 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" event={"ID":"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9","Type":"ContainerStarted","Data":"866d6793f2536996f27e07efdaa93d9e87183d49878953b6f9d018367ae8d321"} Apr 17 08:18:28.348291 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:28.348293 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" event={"ID":"a9e1b118-7b8c-477f-9db8-6418ce9b0ed9","Type":"ContainerStarted","Data":"e25d90a1d243c10d92de03a9b919b38f0c0d739fb0604b76e12d0bf938ffaec2"} Apr 17 08:18:28.348727 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:28.348385 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:28.363281 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:28.363238 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" podStartSLOduration=1.363226359 podStartE2EDuration="1.363226359s" podCreationTimestamp="2026-04-17 08:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 
08:18:28.362151061 +0000 UTC m=+1631.138117086" watchObservedRunningTime="2026-04-17 08:18:28.363226359 +0000 UTC m=+1631.139192382" Apr 17 08:18:28.400054 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:28.400029 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-jmbz2_5681141e-f336-418e-bb90-9a38ef69d0fc/node-ca/0.log" Apr 17 08:18:29.467950 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:29.467920 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-t7j9w_dffe644a-60cc-49b3-96e0-1da3bf018246/serve-healthcheck-canary/0.log" Apr 17 08:18:29.854227 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:29.854192 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-pf4zb_74c1df39-93b8-4a20-b9cd-ec32f8438732/kube-rbac-proxy/0.log" Apr 17 08:18:29.876370 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:29.876345 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-pf4zb_74c1df39-93b8-4a20-b9cd-ec32f8438732/exporter/0.log" Apr 17 08:18:29.896791 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:29.896770 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-pf4zb_74c1df39-93b8-4a20-b9cd-ec32f8438732/extractor/0.log" Apr 17 08:18:31.983280 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:31.983250 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-99q7m_70f8d1df-cd0b-480b-b718-bae60c29718f/server/0.log" Apr 17 08:18:32.157557 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:32.157530 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-hcqvl_88201a3e-a275-4930-b587-58251c2a1bbd/seaweedfs/0.log" Apr 17 08:18:34.360814 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:34.360786 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-nl5sm/perf-node-gather-daemonset-c9jhx" Apr 17 08:18:37.291409 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.291374 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjghl_20cbf601-967c-4931-9e41-f9b3377a7284/kube-multus-additional-cni-plugins/0.log" Apr 17 08:18:37.311365 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.311336 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjghl_20cbf601-967c-4931-9e41-f9b3377a7284/egress-router-binary-copy/0.log" Apr 17 08:18:37.330944 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.330920 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjghl_20cbf601-967c-4931-9e41-f9b3377a7284/cni-plugins/0.log" Apr 17 08:18:37.360599 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.357609 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjghl_20cbf601-967c-4931-9e41-f9b3377a7284/bond-cni-plugin/0.log" Apr 17 08:18:37.378149 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.378126 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjghl_20cbf601-967c-4931-9e41-f9b3377a7284/routeoverride-cni/0.log" Apr 17 08:18:37.397960 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.397940 2569 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjghl_20cbf601-967c-4931-9e41-f9b3377a7284/whereabouts-cni-bincopy/0.log" Apr 17 08:18:37.419853 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.419824 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjghl_20cbf601-967c-4931-9e41-f9b3377a7284/whereabouts-cni/0.log" Apr 17 08:18:37.670897 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.670821 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jm5zl_4331ee4b-17e0-4bfd-a306-b47ead03f055/kube-multus/0.log" Apr 17 08:18:37.737807 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.737777 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-pqns9_6f23d3b8-dbbd-4489-8830-f7fb50e6d226/network-metrics-daemon/0.log" Apr 17 08:18:37.758658 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:37.758621 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-pqns9_6f23d3b8-dbbd-4489-8830-f7fb50e6d226/kube-rbac-proxy/0.log" Apr 17 08:18:39.201578 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.201544 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-controller/0.log" Apr 17 08:18:39.221297 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.221269 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/0.log" Apr 17 08:18:39.229409 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.229369 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovn-acl-logging/1.log" Apr 17 08:18:39.250449 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.250428 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/kube-rbac-proxy-node/0.log" Apr 17 08:18:39.273631 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.273605 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/kube-rbac-proxy-ovn-metrics/0.log" Apr 17 08:18:39.292292 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.292267 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/northd/0.log" Apr 17 08:18:39.311746 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.311723 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/nbdb/0.log" Apr 17 08:18:39.334488 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.334462 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/sbdb/0.log" Apr 17 08:18:39.425503 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:39.425472 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wbxwr_2e0e406d-3d55-41f3-ba63-448c73f82ded/ovnkube-controller/0.log" Apr 17 08:18:40.391998 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:40.391971 2569 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-target-t7s67_f102eb44-3020-49b4-b898-dcf83e0d0a11/network-check-target-container/0.log" Apr 17 08:18:41.382414 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:41.382384 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-k6m72_2f75afd8-16c7-448e-8f36-42d2b2219a87/iptables-alerter/0.log" Apr 17 08:18:41.967446 ip-10-0-132-178 kubenswrapper[2569]: I0417 08:18:41.967418 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-ktjch_8d6a3eec-3fe4-46c9-9cf9-999e26bceb92/tuned/0.log"