Apr 16 17:41:18.423099 ip-10-0-138-134 systemd[1]: Starting Kubernetes Kubelet...
Apr 16 17:41:18.797516 ip-10-0-138-134 kubenswrapper[2571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 17:41:18.797516 ip-10-0-138-134 kubenswrapper[2571]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 16 17:41:18.797516 ip-10-0-138-134 kubenswrapper[2571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 17:41:18.797516 ip-10-0-138-134 kubenswrapper[2571]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 17:41:18.797516 ip-10-0-138-134 kubenswrapper[2571]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 17:41:18.798931 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.798830 2571 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 17:41:18.805944 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805921 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 17:41:18.805944 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805938 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 17:41:18.805944 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805942 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 17:41:18.805944 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805948 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
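Note: the deprecation warnings above say these flags should move into the file passed via --config (here /etc/kubernetes/kubelet.conf, per the FLAG: dump later in this log). A minimal sketch of that migration, assuming the kubelet.config.k8s.io/v1beta1 KubeletConfiguration schema and reusing the values this log reports:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
# was --container-runtime-endpoint (value copied from the FLAG: dump below)
containerRuntimeEndpoint: /var/run/crio/crio.sock
# was --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# was --system-reserved=cpu=500m,ephemeral-storage=1Gi,memory=1Gi
systemReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
# --minimum-container-ttl-duration has no direct config-file equivalent; the
# warning suggests evictionHard / evictionSoft instead (both empty in this run).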
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805953 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805956 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805959 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805962 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805965 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805968 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805971 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805974 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805977 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805980 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805983 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805985 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805988 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805990 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805993 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805996 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.805999 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806001 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 17:41:18.806118 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806004 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806010 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806013 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806015 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806018 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806020 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806023 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806025 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806028 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806031 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806033 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806036 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806039 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806041 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806044 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806047 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806050 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806053 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806055 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806058 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 17:41:18.806608 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806061 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806063 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806066 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806070 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806073 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806075 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806078 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806082 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806085 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806088 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806092 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806095 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806098 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806100 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806103 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806105 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806108 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806110 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806113 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 17:41:18.807157 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806116 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806118 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806121 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806124 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806127 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806129 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806132 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806135 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806138 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806141 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806143 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806146 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806148 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806151 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806153 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806156 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806159 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806162 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806164 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806167 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 17:41:18.807618 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806170 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806173 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806176 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806178 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806181 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806563 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806568 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806571 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806575 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806579 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806582 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806585 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806587 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806590 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806593 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806596 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806598 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806602 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806604 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 17:41:18.808116 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806607 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806611 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806614 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806617 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806620 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806633 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806636 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806639 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806642 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806644 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806648 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806650 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806653 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806656 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806658 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806661 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806663 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806666 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806668 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806671 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 17:41:18.808565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806673 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806676 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806679 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806681 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806683 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806686 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806689 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806691 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806694 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806696 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806699 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806702 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806705 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806708 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806710 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806713 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806715 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806718 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806720 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806723 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 17:41:18.809113 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806725 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806728 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806730 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806733 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806736 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806738 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806741 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806743 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806746 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806749 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806751 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806755 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806757 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806760 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806764 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806767 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806771 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806773 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806776 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 17:41:18.809600 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806778 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806781 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806784 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806787 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806789 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806794 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806811 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806814 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806818 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806821 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806823 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806826 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.806829 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.806942 2571 flags.go:64] FLAG: --address="0.0.0.0"
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.806953 2571 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.806961 2571 flags.go:64] FLAG: --anonymous-auth="true"
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.806966 2571 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.806971 2571 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.806974 2571 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.806979 2571 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.806983 2571 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 16 17:41:18.810154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807000 2571 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807004 2571 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807008 2571 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807011 2571 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807015 2571 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807019 2571 flags.go:64] FLAG: --cgroup-root=""
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807022 2571 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807025 2571 flags.go:64] FLAG: --client-ca-file=""
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807028 2571 flags.go:64] FLAG: --cloud-config=""
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807031 2571 flags.go:64] FLAG: --cloud-provider="external"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807034 2571 flags.go:64] FLAG: --cluster-dns="[]"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807038 2571 flags.go:64] FLAG: --cluster-domain=""
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807041 2571 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807045 2571 flags.go:64] FLAG: --config-dir=""
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807048 2571 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807052 2571 flags.go:64] FLAG: --container-log-max-files="5"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807056 2571 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807059 2571 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807062 2571 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807065 2571 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807069 2571 flags.go:64] FLAG: --contention-profiling="false"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807072 2571 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807075 2571 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807078 2571 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807081 2571 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 16 17:41:18.810673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807086 2571 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807089 2571 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807091 2571 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807094 2571 flags.go:64] FLAG: --enable-load-reader="false"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807098 2571 flags.go:64] FLAG: --enable-server="true"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807101 2571 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807105 2571 flags.go:64] FLAG: --event-burst="100"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807108 2571 flags.go:64] FLAG: --event-qps="50"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807112 2571 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807115 2571 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807118 2571 flags.go:64] FLAG: --eviction-hard=""
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807121 2571 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807128 2571 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807131 2571 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807134 2571 flags.go:64] FLAG: --eviction-soft=""
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807137 2571 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807140 2571 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807143 2571 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807146 2571 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807149 2571 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807154 2571 flags.go:64] FLAG: --fail-swap-on="true"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807157 2571 flags.go:64] FLAG: --feature-gates=""
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807160 2571 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807164 2571 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807167 2571 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 16 17:41:18.811312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807170 2571 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807174 2571 flags.go:64] FLAG: --healthz-port="10248"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807177 2571 flags.go:64] FLAG: --help="false"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807180 2571 flags.go:64] FLAG: --hostname-override="ip-10-0-138-134.ec2.internal"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807183 2571 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807186 2571 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807189 2571 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807192 2571 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807196 2571 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807199 2571 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807201 2571 flags.go:64] FLAG: --image-service-endpoint=""
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807204 2571 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807207 2571 flags.go:64] FLAG: --kube-api-burst="100"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807210 2571 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807213 2571 flags.go:64] FLAG: --kube-api-qps="50"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807216 2571 flags.go:64] FLAG: --kube-reserved=""
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807219 2571 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807222 2571 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807225 2571 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807229 2571 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807232 2571 flags.go:64] FLAG: --lock-file=""
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807235 2571 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807237 2571 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807241 2571 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 16 17:41:18.811941 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807246 2571 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807249 2571 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807251 2571 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807256 2571 flags.go:64] FLAG: --logging-format="text"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807259 2571 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807262 2571 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807265 2571 flags.go:64] FLAG: --manifest-url=""
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807268 2571 flags.go:64] FLAG: --manifest-url-header=""
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807272 2571 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807275 2571 flags.go:64] FLAG: --max-open-files="1000000"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807279 2571 flags.go:64] FLAG: --max-pods="110"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807283 2571 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807286 2571 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807288 2571 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807291 2571 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807294 2571 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807297 2571 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807301 2571 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807308 2571 flags.go:64] FLAG: --node-status-max-images="50"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807311 2571 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807314 2571 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807318 2571 flags.go:64] FLAG: --pod-cidr=""
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807321 2571 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc76bab72f320de3d4105c90d73c4fb139c09e20ce0fa8dcbc0cb59920d27dec"
Apr 16 17:41:18.812540 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807326 2571 flags.go:64] FLAG: --pod-manifest-path=""
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807328 2571 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807331 2571 flags.go:64] FLAG: --pods-per-core="0"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807334 2571 flags.go:64] FLAG: --port="10250"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807337 2571 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807342 2571 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-09dfcaea8f89d074f"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807345 2571 flags.go:64] FLAG: --qos-reserved=""
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807348 2571 flags.go:64] FLAG: --read-only-port="10255"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807351 2571 flags.go:64] FLAG: --register-node="true"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807354 2571 flags.go:64] FLAG: --register-schedulable="true"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807356 2571 flags.go:64] FLAG: --register-with-taints=""
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807364 2571 flags.go:64] FLAG: --registry-burst="10"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807368 2571 flags.go:64] FLAG: --registry-qps="5"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807371 2571 flags.go:64] FLAG: --reserved-cpus=""
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807374 2571 flags.go:64] FLAG: --reserved-memory=""
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807378 2571 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807381 2571 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807384 2571 flags.go:64] FLAG: --rotate-certificates="false"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807387 2571 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807390 2571 flags.go:64] FLAG: --runonce="false"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807393 2571 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807396 2571 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807399 2571 flags.go:64] FLAG: --seccomp-default="false"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807403 2571 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807405 2571 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807408 2571 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 16 17:41:18.813105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807411 2571 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807414 2571 flags.go:64] FLAG: --storage-driver-password="root"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807418 2571 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807420 2571 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807423 2571 flags.go:64] FLAG: --storage-driver-user="root"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807426 2571 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807429 2571 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807433 2571 flags.go:64] FLAG: --system-cgroups=""
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807435 2571 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807440 2571 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807460 2571 flags.go:64] FLAG: --tls-cert-file=""
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807466 2571 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807471 2571 flags.go:64] FLAG: --tls-min-version=""
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807474 2571 flags.go:64] FLAG: --tls-private-key-file=""
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807477 2571 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807479 2571 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807482 2571 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807485 2571 flags.go:64] FLAG: --v="2"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807494 2571 flags.go:64] FLAG: --version="false"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807498 2571 flags.go:64] FLAG: --vmodule=""
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807502 2571 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807505 2571 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807592 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807596 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 17:41:18.813736 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807600 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807604 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807607 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807609 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807612 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807615 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807617 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807620 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807623 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807626 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807628 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807631 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807634 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807637 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807640 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807642 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807645 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807648 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807650 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807654 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 17:41:18.814322 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807656 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807661 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807664 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807668 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807671 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807673 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807677 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807680 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807682 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807685 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807688 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807690 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807693 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807695 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807698 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807700 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807702 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807705 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807708 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 17:41:18.814839 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807712 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807715 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807717 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807720 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807723 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807725 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807728 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807730 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807733 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807736 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807739 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807741 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807744 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807747 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807750 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807753 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807755 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807758 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807760 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807764 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 17:41:18.815335 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807767 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807769 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807772 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 
17:41:18.807775 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807777 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807780 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807783 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807785 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807788 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807790 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807793 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807796 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807798 2571 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807801 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807803 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807806 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807808 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807811 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807814 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807817 2571 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 17:41:18.815849 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807819 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807822 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807825 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807827 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.807830 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.807838 2571 feature_gate.go:384] feature 
gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.814670 2571 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.814687 2571 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814734 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814739 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814743 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814747 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814750 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814753 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814756 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 17:41:18.816367 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814759 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814762 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814765 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814768 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814770 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814773 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814775 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814779 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
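Once a pass completes, feature_gate.go:384 records the effective gate map, the {map[...]} value above, in Go's map syntax, followed here by the kubelet version (v1.33.9). A minimal sketch that turns such a record into a Python dict; the sample line is trimmed from the record above:

    import re

    # The kubelet logs its effective gates in Go map syntax:
    #   feature gates: {map[Name:true Other:false ...]}
    line = "feature gates: {map[ImageVolume:true KMSv1:true NodeSwap:false ServiceAccountTokenNodeBinding:true]}"

    def parse_feature_gates(record: str) -> dict[str, bool]:
        body = re.search(r"\{map\[(.*?)\]\}", record).group(1)
        return {name: value == "true"
                for name, _, value in (pair.partition(":") for pair in body.split())}

    print(parse_feature_gates(line))
    # {'ImageVolume': True, 'KMSv1': True, 'NodeSwap': False,
    #  'ServiceAccountTokenNodeBinding': True}
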
Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814783 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814785 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814788 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814791 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814793 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814796 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814798 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814801 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814804 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814806 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814809 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 17:41:18.816745 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814812 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814817 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
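Two other warning variants in this stream flag known gates rather than unknown ones: feature_gate.go:351 marks explicitly-set gates that are already GA (ServiceAccountTokenNodeBinding above) and feature_gate.go:349 marks deprecated ones (KMSv1); both messages say the setting will be removed in a future release. A short sketch to list them, under the same hypothetical kubelet.log path:

    import re

    # feature_gate.go:349/:351 records flag gates that are explicitly configured
    # yet already GA or deprecated; settings worth cleaning up before an upgrade.
    FLAGGED = re.compile(r"Setting (GA|deprecated) feature gate (\w+)=(\w+)")

    with open("kubelet.log") as fh:  # same hypothetical path as above
        for kind, gate, value in sorted(set(FLAGGED.findall(fh.read()))):
            print(f"{gate}={value} is {kind} and will stop being accepted")
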
Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814820 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814823 2571 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814826 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814830 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814833 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814836 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814840 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814843 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814846 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814848 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814866 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814870 2571 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814875 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814879 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814883 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814886 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814889 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814891 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 17:41:18.817217 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814894 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814897 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814900 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814903 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814906 2571 feature_gate.go:328] unrecognized feature gate: 
NetworkDiagnosticsConfig Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814908 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814911 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814914 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814917 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814920 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814923 2571 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814926 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814930 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814933 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814936 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814939 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814941 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814944 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814947 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814950 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 17:41:18.817716 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814952 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814956 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814958 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814961 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814963 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814966 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814968 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 
17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814971 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814973 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814976 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814979 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814981 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814983 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814986 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814988 2571 feature_gate.go:328] unrecognized feature gate: Example Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814991 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814994 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814997 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.814999 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 17:41:18.818236 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815002 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.815007 2571 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815101 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815105 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815108 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815111 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815115 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815117 2571 feature_gate.go:328] 
unrecognized feature gate: VSphereMixedNodeEnv Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815120 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815123 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815125 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815128 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815131 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815134 2571 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815136 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815139 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 17:41:18.818699 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815142 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815144 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815147 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815150 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815152 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815155 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815157 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815160 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815162 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815165 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815168 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815170 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815173 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815175 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 17:41:18.819115 
ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815178 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815180 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815183 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815186 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815189 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 17:41:18.819115 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815192 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815196 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815199 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815202 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815206 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815209 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815211 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815214 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815217 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815220 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815223 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815225 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815228 2571 feature_gate.go:328] unrecognized feature gate: Example Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815231 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815233 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815236 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815238 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 
17:41:18.815241 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815243 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 17:41:18.819581 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815246 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815248 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815251 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815254 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815256 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815259 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815261 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815264 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815266 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815269 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815271 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815274 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815276 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815279 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815282 2571 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815285 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815287 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815290 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815293 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815296 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 17:41:18.820081 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815298 2571 
feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815301 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815304 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815307 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815309 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815312 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815314 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815317 2571 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815319 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815322 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815326 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815329 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815333 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:18.815336 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.815340 2571 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 17:41:18.820565 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.816015 2571 server.go:962] "Client rotation is on, will bootstrap in background" Apr 16 17:41:18.820950 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.820371 2571 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 16 17:41:18.821175 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.821163 2571 server.go:1019] "Starting client certificate rotation" Apr 16 17:41:18.821268 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.821253 2571 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 17:41:18.821306 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.821297 2571 
certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 17:41:18.842389 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.842372 2571 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 17:41:18.846268 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.846084 2571 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 17:41:18.861453 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.861431 2571 log.go:25] "Validated CRI v1 runtime API" Apr 16 17:41:18.867735 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.867715 2571 log.go:25] "Validated CRI v1 image API" Apr 16 17:41:18.869093 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.869076 2571 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 17:41:18.870393 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.870376 2571 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 17:41:18.874231 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.874212 2571 fs.go:135] Filesystem UUIDs: map[2288303e-83f4-4427-8703-a0d0656add32:/dev/nvme0n1p4 7B77-95E7:/dev/nvme0n1p2 8b62da4d-c118-46a0-8379-febf3a688e6f:/dev/nvme0n1p3] Apr 16 17:41:18.874296 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.874231 2571 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 16 17:41:18.879584 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.879479 2571 manager.go:217] Machine: {Timestamp:2026-04-16 17:41:18.877794549 +0000 UTC m=+0.351024025 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3100650 MemoryCapacity:33164488704 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec285fa01a6a570f9117348de12fd7b7 SystemUUID:ec285fa0-1a6a-570f-9117-348de12fd7b7 BootID:306bf8c7-3db0-4e9e-9d02-d9c317754fcc Filesystems:[{Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6098944 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582242304 Type:vfs Inodes:4048399 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:6d:20:42:3f:8d Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:6d:20:42:3f:8d Speed:0 Mtu:9001} {Name:ovs-system 
MacAddress:2a:27:e1:b4:9e:75 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164488704 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 16 17:41:18.879584 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.879578 2571 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 16 17:41:18.879695 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.879648 2571 manager.go:233] Version: {KernelVersion:5.14.0-570.104.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260401-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 16 17:41:18.880607 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.880586 2571 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 17:41:18.880746 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.880610 2571 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-138-134.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 
17:41:18.880794 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.880755 2571 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 17:41:18.880794 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.880764 2571 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 17:41:18.880794 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.880780 2571 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 17:41:18.881452 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.881442 2571 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 17:41:18.882330 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.882321 2571 state_mem.go:36] "Initialized new in-memory state store" Apr 16 17:41:18.882443 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.882434 2571 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 16 17:41:18.884665 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.884655 2571 kubelet.go:491] "Attempting to sync node with API server" Apr 16 17:41:18.885216 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.885207 2571 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 17:41:18.885253 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.885228 2571 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 16 17:41:18.885253 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.885237 2571 kubelet.go:397] "Adding apiserver pod source" Apr 16 17:41:18.885253 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.885246 2571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 17:41:18.886123 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.886112 2571 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 17:41:18.886173 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.886130 2571 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 17:41:18.888792 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.888776 2571 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 16 17:41:18.890499 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.890484 2571 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 17:41:18.891869 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891834 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 16 17:41:18.891948 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891877 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 16 17:41:18.891948 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891891 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 16 17:41:18.891948 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891903 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 16 17:41:18.891948 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891915 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 16 17:41:18.891948 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891927 2571 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/secret" Apr 16 17:41:18.892103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891958 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 16 17:41:18.892103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891970 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 16 17:41:18.892103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891983 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 16 17:41:18.892103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.891996 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 16 17:41:18.892103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.892026 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 16 17:41:18.892103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.892044 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 16 17:41:18.892841 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.892832 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 16 17:41:18.892841 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.892841 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 16 17:41:18.895287 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.895263 2571 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-h4whj" Apr 16 17:41:18.896925 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.896908 2571 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 17:41:18.896996 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.896955 2571 server.go:1295] "Started kubelet" Apr 16 17:41:18.897134 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.897075 2571 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 17:41:18.897134 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.897101 2571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 17:41:18.897261 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.897164 2571 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 16 17:41:18.897663 ip-10-0-138-134 systemd[1]: Started Kubernetes Kubelet. 
Apr 16 17:41:18.898251 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.898230 2571 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 17:41:18.898312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.898247 2571 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-138-134.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 16 17:41:18.898374 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.898356 2571 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 17:41:18.898423 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.898385 2571 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-138-134.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 17:41:18.898666 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.898515 2571 server.go:317] "Adding debug handlers to kubelet server" Apr 16 17:41:18.902798 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.902780 2571 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-h4whj" Apr 16 17:41:18.903108 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.902329 2571 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-138-134.ec2.internal.18a6e72bed68e773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-138-134.ec2.internal,UID:ip-10-0-138-134.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-138-134.ec2.internal,},FirstTimestamp:2026-04-16 17:41:18.896924531 +0000 UTC m=+0.370153994,LastTimestamp:2026-04-16 17:41:18.896924531 +0000 UTC m=+0.370153994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-138-134.ec2.internal,}" Apr 16 17:41:18.903901 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.903880 2571 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 16 17:41:18.904425 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.904404 2571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 17:41:18.905143 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.905109 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found" Apr 16 17:41:18.905386 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905361 2571 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 17:41:18.905469 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905400 2571 factory.go:55] Registering systemd factory Apr 16 17:41:18.905469 ip-10-0-138-134 
kubenswrapper[2571]: I0416 17:41:18.905416 2571 factory.go:223] Registration of the systemd container factory successfully
Apr 16 17:41:18.905564 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905529 2571 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 16 17:41:18.905564 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905542 2571 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 17:41:18.905660 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905616 2571 factory.go:153] Registering CRI-O factory
Apr 16 17:41:18.905660 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905631 2571 factory.go:223] Registration of the crio container factory successfully
Apr 16 17:41:18.905660 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905641 2571 reconstruct.go:97] "Volume reconstruction finished"
Apr 16 17:41:18.905660 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905648 2571 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 17:41:18.905829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905690 2571 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 16 17:41:18.905829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905713 2571 factory.go:103] Registering Raw factory
Apr 16 17:41:18.905829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.905728 2571 manager.go:1196] Started watching for new ooms in manager
Apr 16 17:41:18.906425 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.906387 2571 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 16 17:41:18.906663 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.906640 2571 manager.go:319] Starting recovery of all containers
Apr 16 17:41:18.912603 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.912576 2571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 17:41:18.913974 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.913789 2571 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 17:41:18.917691 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.917385 2571 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-138-134.ec2.internal\" not found" node="ip-10-0-138-134.ec2.internal"
Apr 16 17:41:18.918973 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.918957 2571 manager.go:324] Recovery completed
Apr 16 17:41:18.923767 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.923754 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 17:41:18.926031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.925944 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 17:41:18.926031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.925975 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 17:41:18.926031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.925991 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientPID"
Apr 16 17:41:18.926488 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.926474 2571 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 16 17:41:18.926535 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.926489 2571 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 16 17:41:18.926535 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.926505 2571 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 17:41:18.928784 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.928770 2571 policy_none.go:49] "None policy: Start"
Apr 16 17:41:18.928832 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.928787 2571 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 17:41:18.928832 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.928796 2571 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 17:41:18.967387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.967374 2571 manager.go:341] "Starting Device Plugin manager"
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.967398 2571 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.967406 2571 server.go:85] "Starting device plugin registration server"
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.967610 2571 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.967624 2571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.967761 2571 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.967838 2571 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:18.967846 2571 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.968288 2571 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 16 17:41:18.969249 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:18.968315 2571 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.030017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.030002 2571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 17:41:19.030017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.030024 2571 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 17:41:19.030128 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.030037 2571 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 17:41:19.030128 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.030043 2571 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 16 17:41:19.030128 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.030070 2571 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 16 17:41:19.033827 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.033813 2571 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 17:41:19.068224 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.068189 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 17:41:19.068898 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.068882 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 17:41:19.068966 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.068908 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 17:41:19.068966 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.068917 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientPID"
Apr 16 17:41:19.068966 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.068936 2571 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.077396 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.077384 2571 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.077436 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.077401 2571 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-138-134.ec2.internal\": node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.090268 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.090249 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.130918 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.130889 2571 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal"]
Apr 16 17:41:19.130998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.130971 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 17:41:19.131674 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.131661 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 17:41:19.131735 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.131686 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 17:41:19.131735 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.131700 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientPID"
Apr 16 17:41:19.132875 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.132849 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 17:41:19.133002 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.132991 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.133045 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.133016 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 17:41:19.133516 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.133489 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 17:41:19.133585 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.133519 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 17:41:19.133585 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.133530 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientPID"
Apr 16 17:41:19.133585 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.133498 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 17:41:19.133585 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.133584 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 17:41:19.133711 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.133598 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientPID"
Apr 16 17:41:19.135466 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.135452 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.135522 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.135475 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 17:41:19.136054 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.136038 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 17:41:19.136111 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.136074 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 17:41:19.136111 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.136091 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeHasSufficientPID"
Apr 16 17:41:19.157460 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.157444 2571 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-138-134.ec2.internal\" not found" node="ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.161888 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.161874 2571 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-138-134.ec2.internal\" not found" node="ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.190657 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.190639 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.207070 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.207055 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/67b84718c32104f594de327de7cddecd-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal\" (UID: \"67b84718c32104f594de327de7cddecd\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.207134 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.207078 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67b84718c32104f594de327de7cddecd-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal\" (UID: \"67b84718c32104f594de327de7cddecd\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.207134 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.207107 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/612d2783cb89512547948baa230ca5ee-config\") pod \"kube-apiserver-proxy-ip-10-0-138-134.ec2.internal\" (UID: \"612d2783cb89512547948baa230ca5ee\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.291082 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.291064 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.307458 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.307439 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/67b84718c32104f594de327de7cddecd-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal\" (UID: \"67b84718c32104f594de327de7cddecd\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.307524 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.307406 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/67b84718c32104f594de327de7cddecd-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal\" (UID: \"67b84718c32104f594de327de7cddecd\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.307524 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.307502 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67b84718c32104f594de327de7cddecd-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal\" (UID: \"67b84718c32104f594de327de7cddecd\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.307587 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.307529 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/612d2783cb89512547948baa230ca5ee-config\") pod \"kube-apiserver-proxy-ip-10-0-138-134.ec2.internal\" (UID: \"612d2783cb89512547948baa230ca5ee\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.307587 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.307551 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/612d2783cb89512547948baa230ca5ee-config\") pod \"kube-apiserver-proxy-ip-10-0-138-134.ec2.internal\" (UID: \"612d2783cb89512547948baa230ca5ee\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.307646 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.307589 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67b84718c32104f594de327de7cddecd-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal\" (UID: \"67b84718c32104f594de327de7cddecd\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.391817 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.391798 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.461339 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.461322 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.464931 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.464914 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:19.492852 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.492830 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.593401 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.593366 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.693896 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.693825 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.794474 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.794446 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.820914 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.820895 2571 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 16 17:41:19.821394 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.821065 2571 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 16 17:41:19.821394 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.821072 2571 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 16 17:41:19.894085 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:19.894050 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b84718c32104f594de327de7cddecd.slice/crio-d6b5a31bea5d1408c9ff1eee9ecfb7900c40d1fceb43d0c5762d07bf378c1ae1 WatchSource:0}: Error finding container d6b5a31bea5d1408c9ff1eee9ecfb7900c40d1fceb43d0c5762d07bf378c1ae1: Status 404 returned error can't find the container with id d6b5a31bea5d1408c9ff1eee9ecfb7900c40d1fceb43d0c5762d07bf378c1ae1
Apr 16 17:41:19.894476 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:19.894460 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod612d2783cb89512547948baa230ca5ee.slice/crio-8c581370bf0144846c5d4f17edaa6661ac544027fa76f863ffef2f5b7778329a WatchSource:0}: Error finding container 8c581370bf0144846c5d4f17edaa6661ac544027fa76f863ffef2f5b7778329a: Status 404 returned error can't find the container with id 8c581370bf0144846c5d4f17edaa6661ac544027fa76f863ffef2f5b7778329a
Apr 16 17:41:19.894536 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.894492 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:19.898649 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.898635 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 17:41:19.904499 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.904476 2571 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 16 17:41:19.904499 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.904475 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-15 17:36:18 +0000 UTC" deadline="2027-10-04 21:44:27.088765073 +0000 UTC"
Apr 16 17:41:19.904499 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.904500 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12868h3m7.184269289s"
Apr 16 17:41:19.917800 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.917783 2571 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 16 17:41:19.934847 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.934828 2571 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-8f7pn"
Apr 16 17:41:19.942320 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:19.942305 2571 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-8f7pn"
Apr 16 17:41:19.995419 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:19.995374 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-138-134.ec2.internal\" not found"
Apr 16 17:41:20.017107 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.017088 2571 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 17:41:20.032257 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.032223 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal" event={"ID":"67b84718c32104f594de327de7cddecd","Type":"ContainerStarted","Data":"d6b5a31bea5d1408c9ff1eee9ecfb7900c40d1fceb43d0c5762d07bf378c1ae1"}
Apr 16 17:41:20.033188 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.033167 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal" event={"ID":"612d2783cb89512547948baa230ca5ee","Type":"ContainerStarted","Data":"8c581370bf0144846c5d4f17edaa6661ac544027fa76f863ffef2f5b7778329a"}
Apr 16 17:41:20.064798 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.064778 2571 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 17:41:20.105234 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.105214 2571 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:20.115915 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.115895 2571 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 17:41:20.118301 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.118289 2571 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal"
Apr 16 17:41:20.125618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.125604 2571 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 17:41:20.887300 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.887269 2571 apiserver.go:52] "Watching apiserver"
Apr 16 17:41:20.892575 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.892553 2571 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 16 17:41:20.893477 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.893454 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/konnectivity-agent-hc5pn","kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn","openshift-cluster-node-tuning-operator/tuned-fv6lf","openshift-dns/node-resolver-cgldj","openshift-multus/multus-2lqx9","openshift-network-operator/iptables-alerter-wjvxt","openshift-ovn-kubernetes/ovnkube-node-69xw9","openshift-image-registry/node-ca-hdwwl","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal","openshift-multus/multus-additional-cni-plugins-tlljg","openshift-multus/network-metrics-daemon-x4kh5","openshift-network-diagnostics/network-check-target-wrf29"]
Apr 16 17:41:20.896148 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.896129 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wjvxt"
Apr 16 17:41:20.897171 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.897146 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn"
Apr 16 17:41:20.898084 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.898064 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.898445 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.898423 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-q5gxj\""
Apr 16 17:41:20.898530 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.898436 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Apr 16 17:41:20.898677 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.898663 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Apr 16 17:41:20.898973 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.898958 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Apr 16 17:41:20.899099 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.899011 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Apr 16 17:41:20.899187 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.899123 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 16 17:41:20.899280 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.899199 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cgldj"
Apr 16 17:41:20.899464 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.899348 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.899554 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.899473 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 16 17:41:20.899677 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.899580 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-4m88m\""
Apr 16 17:41:20.900164 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.900107 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-lhz44\""
Apr 16 17:41:20.900164 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.900138 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Apr 16 17:41:20.900322 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.900142 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Apr 16 17:41:20.900516 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.900500 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-hc5pn"
Apr 16 17:41:20.901031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901016 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-qmr25\""
Apr 16 17:41:20.901031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901024 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Apr 16 17:41:20.901183 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901168 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 16 17:41:20.901352 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901334 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 16 17:41:20.901464 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901450 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-mjst5\""
Apr 16 17:41:20.901519 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901510 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Apr 16 17:41:20.901665 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901628 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.901961 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901938 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Apr 16 17:41:20.902048 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.901977 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 16 17:41:20.902048 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.902009 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Apr 16 17:41:20.902785 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.902770 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-c4mbn\""
Apr 16 17:41:20.902882 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.902812 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 16 17:41:20.903033 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.903016 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hdwwl"
Apr 16 17:41:20.903986 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.903419 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Apr 16 17:41:20.903986 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.903443 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-kkmhp\""
Apr 16 17:41:20.903986 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.903531 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Apr 16 17:41:20.904105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.903978 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Apr 16 17:41:20.905816 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.904592 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Apr 16 17:41:20.905816 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.904602 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.905816 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.904904 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Apr 16 17:41:20.905816 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.905616 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Apr 16 17:41:20.907660 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.907642 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5"
Apr 16 17:41:20.907749 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:20.907729 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c"
Apr 16 17:41:20.909431 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.909033 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Apr 16 17:41:20.909431 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.909109 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Apr 16 17:41:20.909431 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.909127 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Apr 16 17:41:20.909431 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.909184 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-q52x8\""
Apr 16 17:41:20.909431 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.909242 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Apr 16 17:41:20.909431 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.909257 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-dwljk\""
Apr 16 17:41:20.910092 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.910074 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29"
Apr 16 17:41:20.910192 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:20.910161 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834"
Apr 16 17:41:20.910388 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.910364 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Apr 16 17:41:20.916415 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916398 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-os-release\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.916513 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916428 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-run-ovn-kubernetes\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.916513 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916464 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.916619 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916528 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.916619 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916564 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-registration-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn"
Apr 16 17:41:20.916619 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916592 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-etc-kubernetes\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.916759 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916616 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-sys\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.916759 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916639 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-socket-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn"
Apr 16 17:41:20.916759 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916665 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-kubernetes\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.916759 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916704 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9b6b64b-dc77-4472-8311-249ac8242441-cni-binary-copy\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.916759 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916734 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-k8s-cni-cncf-io\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916770 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14054c8d-e316-4c67-a78f-0c44fc1bddc2-host-slash\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916797 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvhpj\" (UniqueName: \"kubernetes.io/projected/14054c8d-e316-4c67-a78f-0c44fc1bddc2-kube-api-access-mvhpj\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916824 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysctl-conf\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916848 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-conf-dir\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916887 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ff7\" (UniqueName: \"kubernetes.io/projected/5e111816-868e-46a5-9605-122507f445ea-kube-api-access-g2ff7\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916911 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-systemd\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916936 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-log-socket\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916973 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dg9\" (UniqueName: \"kubernetes.io/projected/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-kube-api-access-62dg9\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.916998 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-system-cni-dir\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.917017 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917020 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-lib-modules\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917043 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/21c668ba-6d2b-43f2-926f-50b6a51598db-konnectivity-ca\") pod \"konnectivity-agent-hc5pn\" (UID: \"21c668ba-6d2b-43f2-926f-50b6a51598db\") " pod="kube-system/konnectivity-agent-hc5pn"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917065 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6jw\" (UniqueName: \"kubernetes.io/projected/8976bd90-ca55-4016-a1b8-fffeec56d443-kube-api-access-fp6jw\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917089 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-cnibin\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917110 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-cni-netd\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917134 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-ovnkube-config\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917157 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-kubelet\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917182 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-cni-dir\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917215 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3d328985-f90b-469d-a8a4-9962d8311ef2-tmp-dir\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917239 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-sys-fs\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917260 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysconfig\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917281 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5e111816-868e-46a5-9605-122507f445ea-etc-tuned\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917303 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-kubelet\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917325 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-node-log\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917347 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-kubelet-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917368 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-device-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917392 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b9b6b64b-dc77-4472-8311-249ac8242441-multus-daemon-config\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.917483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917411 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysctl-d\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917448 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8wwz\" (UniqueName: \"kubernetes.io/projected/3d328985-f90b-469d-a8a4-9962d8311ef2-kube-api-access-z8wwz\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917473 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/21c668ba-6d2b-43f2-926f-50b6a51598db-agent-certs\") pod \"konnectivity-agent-hc5pn\" (UID: \"21c668ba-6d2b-43f2-926f-50b6a51598db\") " pod="kube-system/konnectivity-agent-hc5pn"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917496 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-run-netns\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917517 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-serviceca\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917536 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-etc-selinux\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917577 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-multus-certs\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917611 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917635 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgl7z\" (UniqueName: \"kubernetes.io/projected/5b6b22f4-0d78-4198-b821-0f4f52115d9c-kube-api-access-sgl7z\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917663 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/14054c8d-e316-4c67-a78f-0c44fc1bddc2-iptables-alerter-script\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917686 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-systemd\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917708 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8976bd90-ca55-4016-a1b8-fffeec56d443-ovn-node-metrics-cert\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917730 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-ovnkube-script-lib\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917764 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917795 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-slash\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917817 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917834 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-env-overrides\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917892 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-system-cni-dir\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917929 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-cnibin\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917970 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bbwj\" (UniqueName: \"kubernetes.io/projected/3e976806-7125-4a84-96f3-609791878cd8-kube-api-access-4bbwj\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.917990 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtdh\" (UniqueName: \"kubernetes.io/projected/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-kube-api-access-nqtdh\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918041 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-socket-dir-parent\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918078 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-var-lib-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918102 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-host\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918128 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-os-release\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918155 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918186 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-host\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918216 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5e111816-868e-46a5-9605-122507f445ea-tmp\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918243 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3d328985-f90b-469d-a8a4-9962d8311ef2-hosts-file\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918270 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-systemd-units\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918295 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-etc-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918329 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vfwt\" (UniqueName: \"kubernetes.io/projected/b9b6b64b-dc77-4472-8311-249ac8242441-kube-api-access-8vfwt\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.918952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918360 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-run\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918382 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-cni-bin\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918403 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-cni-multus\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918446 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-modprobe-d\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918469 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-ovn\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918498 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-cni-bin\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918524 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918546 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-netns\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918569 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-hostroot\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:20.919373 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.918592 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-var-lib-kubelet\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf"
Apr 16 17:41:20.937665 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.937647 2571 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 17:41:20.943389 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.943368 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 17:36:19 +0000 UTC" deadline="2027-11-17 01:53:35.122046078 +0000 UTC"
Apr 16 17:41:20.943389 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:20.943387 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13904h12m14.178661272s"
Apr 16 17:41:21.007217 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.007192 2571 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 17:41:21.019409 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019384 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-kubelet\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9"
Apr 16 17:41:21.019409 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019412 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-cni-dir\") pod
\"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019427 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3d328985-f90b-469d-a8a4-9962d8311ef2-tmp-dir\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019442 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-sys-fs\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019459 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysconfig\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019517 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysconfig\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019534 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-cni-dir\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019538 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-sys-fs\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019536 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-kubelet\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019566 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5e111816-868e-46a5-9605-122507f445ea-etc-tuned\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.019618 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019598 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-kubelet\") pod \"ovnkube-node-69xw9\" 
(UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019624 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-node-log\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019647 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-kubelet-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019657 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-kubelet\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019672 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-device-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019703 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b9b6b64b-dc77-4472-8311-249ac8242441-multus-daemon-config\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019714 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-node-log\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019726 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysctl-d\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019737 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3d328985-f90b-469d-a8a4-9962d8311ef2-tmp-dir\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019742 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-kubelet-dir\") pod 
\"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019751 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z8wwz\" (UniqueName: \"kubernetes.io/projected/3d328985-f90b-469d-a8a4-9962d8311ef2-kube-api-access-z8wwz\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019785 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/21c668ba-6d2b-43f2-926f-50b6a51598db-agent-certs\") pod \"konnectivity-agent-hc5pn\" (UID: \"21c668ba-6d2b-43f2-926f-50b6a51598db\") " pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019781 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-device-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019820 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-run-netns\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019897 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysctl-d\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019870 2571 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019928 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-serviceca\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019956 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-etc-selinux\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.020042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019981 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-multus-certs\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020005 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020030 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sgl7z\" (UniqueName: \"kubernetes.io/projected/5b6b22f4-0d78-4198-b821-0f4f52115d9c-kube-api-access-sgl7z\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020053 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/14054c8d-e316-4c67-a78f-0c44fc1bddc2-iptables-alerter-script\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020074 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-systemd\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020099 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8976bd90-ca55-4016-a1b8-fffeec56d443-ovn-node-metrics-cert\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020112 2571 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-etc-selinux\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020123 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-ovnkube-script-lib\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.019920 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-run-netns\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020147 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020172 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-slash\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020195 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020265 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-systemd\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020218 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-env-overrides\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020357 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-serviceca\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 
17:41:21.020380 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-system-cni-dir\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020419 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-cnibin\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.020848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020447 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4bbwj\" (UniqueName: \"kubernetes.io/projected/3e976806-7125-4a84-96f3-609791878cd8-kube-api-access-4bbwj\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020473 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtdh\" (UniqueName: \"kubernetes.io/projected/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-kube-api-access-nqtdh\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020499 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-socket-dir-parent\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.020512 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020523 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-var-lib-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020547 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-host\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.020601 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:41:21.520571081 +0000 UTC m=+2.993800548 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020626 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-cnibin\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020627 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-os-release\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020672 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020691 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-os-release\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020691 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-ovnkube-script-lib\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020715 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-host\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020751 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b9b6b64b-dc77-4472-8311-249ac8242441-multus-daemon-config\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020796 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5e111816-868e-46a5-9605-122507f445ea-tmp\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 
16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020807 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-var-lib-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020828 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3d328985-f90b-469d-a8a4-9962d8311ef2-hosts-file\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj" Apr 16 17:41:21.021642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020847 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-host\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020880 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-systemd-units\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020908 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-etc-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020934 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vfwt\" (UniqueName: \"kubernetes.io/projected/b9b6b64b-dc77-4472-8311-249ac8242441-kube-api-access-8vfwt\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020948 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-systemd-units\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020960 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-run\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020986 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-cni-bin\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.022408 
ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020997 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-socket-dir-parent\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021009 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-cni-multus\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021033 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-modprobe-d\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021057 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-ovn\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021082 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-cni-bin\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021103 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021123 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-netns\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021142 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-hostroot\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021163 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-var-lib-kubelet\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" 
Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021215 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-var-lib-kubelet\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021277 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/14054c8d-e316-4c67-a78f-0c44fc1bddc2-iptables-alerter-script\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt" Apr 16 17:41:21.022408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.020995 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-slash\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021035 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021370 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-run\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021413 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-cni-bin\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021447 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-cni-bin\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021452 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-etc-openvswitch\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021455 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-host\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.023247 
ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021479 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-env-overrides\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021485 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-var-lib-cni-multus\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021510 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-system-cni-dir\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021526 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021542 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-os-release\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021565 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-modprobe-d\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021571 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-run-ovn-kubernetes\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021572 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-multus-certs\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021609 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-netns\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") 
" pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021619 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-hostroot\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021667 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-os-release\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023247 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021714 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-run-ovn-kubernetes\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021785 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3d328985-f90b-469d-a8a4-9962d8311ef2-hosts-file\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021790 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-ovn\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021847 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021899 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021907 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021926 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-registration-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021950 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-etc-kubernetes\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021958 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021973 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-sys\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.021997 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-socket-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022008 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-etc-kubernetes\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022008 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-registration-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022023 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-kubernetes\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022049 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9b6b64b-dc77-4472-8311-249ac8242441-cni-binary-copy\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022057 2571 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-sys\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022075 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-k8s-cni-cncf-io\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.023998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022093 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-socket-dir\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022101 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14054c8d-e316-4c67-a78f-0c44fc1bddc2-host-slash\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022114 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-kubernetes\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022153 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14054c8d-e316-4c67-a78f-0c44fc1bddc2-host-slash\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022194 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mvhpj\" (UniqueName: \"kubernetes.io/projected/14054c8d-e316-4c67-a78f-0c44fc1bddc2-kube-api-access-mvhpj\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022203 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-host-run-k8s-cni-cncf-io\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022225 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysctl-conf\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.024602 ip-10-0-138-134 
kubenswrapper[2571]: I0416 17:41:21.022252 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-conf-dir\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022278 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ff7\" (UniqueName: \"kubernetes.io/projected/5e111816-868e-46a5-9605-122507f445ea-kube-api-access-g2ff7\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022301 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-systemd\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022324 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-log-socket\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022379 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-62dg9\" (UniqueName: \"kubernetes.io/projected/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-kube-api-access-62dg9\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022427 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022455 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-system-cni-dir\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022487 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-lib-modules\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022512 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/21c668ba-6d2b-43f2-926f-50b6a51598db-konnectivity-ca\") pod \"konnectivity-agent-hc5pn\" (UID: 
\"21c668ba-6d2b-43f2-926f-50b6a51598db\") " pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022512 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9b6b64b-dc77-4472-8311-249ac8242441-cni-binary-copy\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.024602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022559 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-multus-conf-dir\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022564 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-run-systemd\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022602 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6jw\" (UniqueName: \"kubernetes.io/projected/8976bd90-ca55-4016-a1b8-fffeec56d443-kube-api-access-fp6jw\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022614 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-etc-sysctl-conf\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022639 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-cnibin\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022616 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-log-socket\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022350 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e976806-7125-4a84-96f3-609791878cd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022668 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-cni-netd\") pod \"ovnkube-node-69xw9\" (UID: 
\"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022695 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-ovnkube-config\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022796 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-cnibin\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022809 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9b6b64b-dc77-4472-8311-249ac8242441-system-cni-dir\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022837 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8976bd90-ca55-4016-a1b8-fffeec56d443-host-cni-netd\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.022926 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e111816-868e-46a5-9605-122507f445ea-lib-modules\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.023168 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8976bd90-ca55-4016-a1b8-fffeec56d443-ovnkube-config\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.023278 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e976806-7125-4a84-96f3-609791878cd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.023363 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/21c668ba-6d2b-43f2-926f-50b6a51598db-konnectivity-ca\") pod \"konnectivity-agent-hc5pn\" (UID: \"21c668ba-6d2b-43f2-926f-50b6a51598db\") " pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.023707 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5e111816-868e-46a5-9605-122507f445ea-tmp\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " 
pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.023831 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5e111816-868e-46a5-9605-122507f445ea-etc-tuned\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.025292 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.024382 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8976bd90-ca55-4016-a1b8-fffeec56d443-ovn-node-metrics-cert\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.026087 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.025173 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/21c668ba-6d2b-43f2-926f-50b6a51598db-agent-certs\") pod \"konnectivity-agent-hc5pn\" (UID: \"21c668ba-6d2b-43f2-926f-50b6a51598db\") " pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:21.027545 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.027465 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8wwz\" (UniqueName: \"kubernetes.io/projected/3d328985-f90b-469d-a8a4-9962d8311ef2-kube-api-access-z8wwz\") pod \"node-resolver-cgldj\" (UID: \"3d328985-f90b-469d-a8a4-9962d8311ef2\") " pod="openshift-dns/node-resolver-cgldj" Apr 16 17:41:21.028622 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.028553 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgl7z\" (UniqueName: \"kubernetes.io/projected/5b6b22f4-0d78-4198-b821-0f4f52115d9c-kube-api-access-sgl7z\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:21.028990 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.028968 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtdh\" (UniqueName: \"kubernetes.io/projected/0cbb16f9-6753-46fc-bde2-d1725b6a8bd7-kube-api-access-nqtdh\") pod \"aws-ebs-csi-driver-node-cmzsn\" (UID: \"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.029250 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.029231 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bbwj\" (UniqueName: \"kubernetes.io/projected/3e976806-7125-4a84-96f3-609791878cd8-kube-api-access-4bbwj\") pod \"multus-additional-cni-plugins-tlljg\" (UID: \"3e976806-7125-4a84-96f3-609791878cd8\") " pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.029374 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.029355 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vfwt\" (UniqueName: \"kubernetes.io/projected/b9b6b64b-dc77-4472-8311-249ac8242441-kube-api-access-8vfwt\") pod \"multus-2lqx9\" (UID: \"b9b6b64b-dc77-4472-8311-249ac8242441\") " pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.036517 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.036488 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-62dg9\" (UniqueName: 
\"kubernetes.io/projected/dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01-kube-api-access-62dg9\") pod \"node-ca-hdwwl\" (UID: \"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01\") " pod="openshift-image-registry/node-ca-hdwwl" Apr 16 17:41:21.037048 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.037024 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ff7\" (UniqueName: \"kubernetes.io/projected/5e111816-868e-46a5-9605-122507f445ea-kube-api-access-g2ff7\") pod \"tuned-fv6lf\" (UID: \"5e111816-868e-46a5-9605-122507f445ea\") " pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.037363 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.037345 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvhpj\" (UniqueName: \"kubernetes.io/projected/14054c8d-e316-4c67-a78f-0c44fc1bddc2-kube-api-access-mvhpj\") pod \"iptables-alerter-wjvxt\" (UID: \"14054c8d-e316-4c67-a78f-0c44fc1bddc2\") " pod="openshift-network-operator/iptables-alerter-wjvxt" Apr 16 17:41:21.037755 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.037739 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6jw\" (UniqueName: \"kubernetes.io/projected/8976bd90-ca55-4016-a1b8-fffeec56d443-kube-api-access-fp6jw\") pod \"ovnkube-node-69xw9\" (UID: \"8976bd90-ca55-4016-a1b8-fffeec56d443\") " pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.123726 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.123702 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:21.128503 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.128479 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:41:21.128588 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.128510 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:41:21.128588 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.128523 2571 projected.go:194] Error preparing data for projected volume kube-api-access-527cz for pod openshift-network-diagnostics/network-check-target-wrf29: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:21.128674 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.128602 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz podName:679af96e-47e1-4212-8ee0-e6d82d302834 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:21.628582645 +0000 UTC m=+3.101812102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-527cz" (UniqueName: "kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz") pod "network-check-target-wrf29" (UID: "679af96e-47e1-4212-8ee0-e6d82d302834") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:21.209150 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.209123 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wjvxt" Apr 16 17:41:21.214776 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.214753 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" Apr 16 17:41:21.222250 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.222229 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" Apr 16 17:41:21.226806 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.226791 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cgldj" Apr 16 17:41:21.233304 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.233285 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2lqx9" Apr 16 17:41:21.240835 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.240816 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:21.248411 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.248391 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:21.255940 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.255924 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hdwwl" Apr 16 17:41:21.261382 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.261364 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tlljg" Apr 16 17:41:21.321947 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.321928 2571 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 17:41:21.446579 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:21.446502 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd4660ce_5c75_41dc_a8ed_0e0e55ac0a01.slice/crio-c5bd8d39c56a741345a8d08da6ec9d78c4db6e50e962bc69ba86d2566eb2486c WatchSource:0}: Error finding container c5bd8d39c56a741345a8d08da6ec9d78c4db6e50e962bc69ba86d2566eb2486c: Status 404 returned error can't find the container with id c5bd8d39c56a741345a8d08da6ec9d78c4db6e50e962bc69ba86d2566eb2486c Apr 16 17:41:21.448248 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:21.448229 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b6b64b_dc77_4472_8311_249ac8242441.slice/crio-2b99738348a81634814e82f998b5d9f911660e5146cf149b7dcae61afc32a271 WatchSource:0}: Error finding container 2b99738348a81634814e82f998b5d9f911660e5146cf149b7dcae61afc32a271: Status 404 returned error can't find the container with id 2b99738348a81634814e82f998b5d9f911660e5146cf149b7dcae61afc32a271 Apr 16 17:41:21.448940 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:21.448917 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cbb16f9_6753_46fc_bde2_d1725b6a8bd7.slice/crio-44c1a47de8a3c290b0c53fd82a171440826d065bade7b47ef72b5c1565e5d4c2 WatchSource:0}: Error finding container 44c1a47de8a3c290b0c53fd82a171440826d065bade7b47ef72b5c1565e5d4c2: Status 404 returned error can't find the container with id 44c1a47de8a3c290b0c53fd82a171440826d065bade7b47ef72b5c1565e5d4c2 Apr 16 17:41:21.450276 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:21.450199 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8976bd90_ca55_4016_a1b8_fffeec56d443.slice/crio-68be4762fb6c9d0ef8eda6d5fa9401461d7ff400d2822fe352719260ddb591bf WatchSource:0}: Error finding container 68be4762fb6c9d0ef8eda6d5fa9401461d7ff400d2822fe352719260ddb591bf: Status 404 returned error can't find the container with id 68be4762fb6c9d0ef8eda6d5fa9401461d7ff400d2822fe352719260ddb591bf Apr 16 17:41:21.451532 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:21.451459 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d328985_f90b_469d_a8a4_9962d8311ef2.slice/crio-11d4ca63bf7785c7008c5902a518746eaf33a3a627c838329efcd5a4925c9090 WatchSource:0}: Error finding container 11d4ca63bf7785c7008c5902a518746eaf33a3a627c838329efcd5a4925c9090: Status 404 returned error can't find the container with id 11d4ca63bf7785c7008c5902a518746eaf33a3a627c838329efcd5a4925c9090 Apr 16 17:41:21.454585 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:21.454387 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21c668ba_6d2b_43f2_926f_50b6a51598db.slice/crio-6836f47505f522f28698dbae8a7d4d6ebc2bfe13835de5f89ea9505d62beeab8 WatchSource:0}: Error finding container 6836f47505f522f28698dbae8a7d4d6ebc2bfe13835de5f89ea9505d62beeab8: Status 404 returned error can't find the 
container with id 6836f47505f522f28698dbae8a7d4d6ebc2bfe13835de5f89ea9505d62beeab8 Apr 16 17:41:21.527724 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.527495 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:21.527724 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.527621 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:21.527724 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.527695 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:41:22.527669607 +0000 UTC m=+4.000899072 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:21.729470 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.729391 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:21.729574 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.729537 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:41:21.729574 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.729555 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:41:21.729574 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.729564 2571 projected.go:194] Error preparing data for projected volume kube-api-access-527cz for pod openshift-network-diagnostics/network-check-target-wrf29: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:21.729701 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:21.729628 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz podName:679af96e-47e1-4212-8ee0-e6d82d302834 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:22.729613515 +0000 UTC m=+4.202842969 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-527cz" (UniqueName: "kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz") pod "network-check-target-wrf29" (UID: "679af96e-47e1-4212-8ee0-e6d82d302834") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:21.944634 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.944559 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 17:36:19 +0000 UTC" deadline="2027-09-25 23:14:38.590643034 +0000 UTC" Apr 16 17:41:21.944634 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:21.944594 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12653h33m16.646052886s" Apr 16 17:41:22.031844 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.031292 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:22.031844 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:22.031409 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:22.051991 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.051892 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cgldj" event={"ID":"3d328985-f90b-469d-a8a4-9962d8311ef2","Type":"ContainerStarted","Data":"11d4ca63bf7785c7008c5902a518746eaf33a3a627c838329efcd5a4925c9090"} Apr 16 17:41:22.057622 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.057559 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lqx9" event={"ID":"b9b6b64b-dc77-4472-8311-249ac8242441","Type":"ContainerStarted","Data":"2b99738348a81634814e82f998b5d9f911660e5146cf149b7dcae61afc32a271"} Apr 16 17:41:22.071043 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.070348 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal" event={"ID":"612d2783cb89512547948baa230ca5ee","Type":"ContainerStarted","Data":"1fc505f725ad6d83ecc9cfcdb5ed4d6f7a9f60f306225295da1e46ed887bf926"} Apr 16 17:41:22.077389 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.077361 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" event={"ID":"5e111816-868e-46a5-9605-122507f445ea","Type":"ContainerStarted","Data":"8ae5c9a7dfd11c967b2ed2f1d68b25fb1eca6576674c5f3afe4acda1322e06e3"} Apr 16 17:41:22.085351 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.085329 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-hc5pn" event={"ID":"21c668ba-6d2b-43f2-926f-50b6a51598db","Type":"ContainerStarted","Data":"6836f47505f522f28698dbae8a7d4d6ebc2bfe13835de5f89ea9505d62beeab8"} Apr 16 17:41:22.093491 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.093453 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wjvxt" 
event={"ID":"14054c8d-e316-4c67-a78f-0c44fc1bddc2","Type":"ContainerStarted","Data":"de4ef4d2ad0fca2784f55a1aced58785f814698b24968cd8b1bcf54a1263fea4"} Apr 16 17:41:22.094934 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.094887 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" event={"ID":"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7","Type":"ContainerStarted","Data":"44c1a47de8a3c290b0c53fd82a171440826d065bade7b47ef72b5c1565e5d4c2"} Apr 16 17:41:22.096222 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.096203 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"68be4762fb6c9d0ef8eda6d5fa9401461d7ff400d2822fe352719260ddb591bf"} Apr 16 17:41:22.097800 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.097737 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hdwwl" event={"ID":"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01","Type":"ContainerStarted","Data":"c5bd8d39c56a741345a8d08da6ec9d78c4db6e50e962bc69ba86d2566eb2486c"} Apr 16 17:41:22.099966 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.099928 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerStarted","Data":"a6edd59efaf4bdc61bb7d52190235fe4f3d9038293cab2d47d16f5ce175aba06"} Apr 16 17:41:22.538213 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.538179 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:22.538359 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:22.538324 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:22.538423 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:22.538382 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:41:24.538364 +0000 UTC m=+6.011593468 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:22.739396 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:22.739360 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:22.739610 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:22.739581 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:41:22.739610 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:22.739602 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:41:22.739739 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:22.739614 2571 projected.go:194] Error preparing data for projected volume kube-api-access-527cz for pod openshift-network-diagnostics/network-check-target-wrf29: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:22.739739 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:22.739667 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz podName:679af96e-47e1-4212-8ee0-e6d82d302834 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:24.739648325 +0000 UTC m=+6.212877792 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-527cz" (UniqueName: "kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz") pod "network-check-target-wrf29" (UID: "679af96e-47e1-4212-8ee0-e6d82d302834") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:23.033381 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:23.033306 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:23.033804 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:23.033440 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:23.113264 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:23.113229 2571 generic.go:358] "Generic (PLEG): container finished" podID="67b84718c32104f594de327de7cddecd" containerID="c577f03003373ed6a4d57f71793f242f685ca9a699b9063effb0704dbe5a65e3" exitCode=0 Apr 16 17:41:23.114142 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:23.114117 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal" event={"ID":"67b84718c32104f594de327de7cddecd","Type":"ContainerDied","Data":"c577f03003373ed6a4d57f71793f242f685ca9a699b9063effb0704dbe5a65e3"} Apr 16 17:41:23.126113 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:23.126062 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-138-134.ec2.internal" podStartSLOduration=3.126045479 podStartE2EDuration="3.126045479s" podCreationTimestamp="2026-04-16 17:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:41:22.084265763 +0000 UTC m=+3.557495237" watchObservedRunningTime="2026-04-16 17:41:23.126045479 +0000 UTC m=+4.599274953" Apr 16 17:41:24.031522 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:24.030986 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:24.031522 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:24.031106 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:24.125122 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:24.125085 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal" event={"ID":"67b84718c32104f594de327de7cddecd","Type":"ContainerStarted","Data":"58ef9f47f2e28a25d26b41bb400b2a18c3d26ba512d5a75ecec517b8e7822aec"} Apr 16 17:41:24.556647 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:24.556608 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:24.556809 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:24.556769 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:24.556883 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:24.556843 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:41:28.556823019 +0000 UTC m=+10.030052488 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:24.758881 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:24.758784 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:24.759057 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:24.758953 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:41:24.759057 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:24.758972 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:41:24.759057 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:24.758984 2571 projected.go:194] Error preparing data for projected volume kube-api-access-527cz for pod openshift-network-diagnostics/network-check-target-wrf29: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:24.759057 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:24.759043 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz podName:679af96e-47e1-4212-8ee0-e6d82d302834 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:28.759024242 +0000 UTC m=+10.232253695 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-527cz" (UniqueName: "kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz") pod "network-check-target-wrf29" (UID: "679af96e-47e1-4212-8ee0-e6d82d302834") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:25.031210 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.031176 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:25.031390 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:25.031306 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:25.695631 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.695573 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-138-134.ec2.internal" podStartSLOduration=5.695553246 podStartE2EDuration="5.695553246s" podCreationTimestamp="2026-04-16 17:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:41:24.140680139 +0000 UTC m=+5.613909614" watchObservedRunningTime="2026-04-16 17:41:25.695553246 +0000 UTC m=+7.168782719" Apr 16 17:41:25.696573 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.696549 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-mtgtk"] Apr 16 17:41:25.698903 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.698839 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.699008 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:25.698934 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:25.766389 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.766197 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/a7cbf992-bfb2-4889-ba57-9de812ce16d4-dbus\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.766389 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.766237 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.766389 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.766295 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a7cbf992-bfb2-4889-ba57-9de812ce16d4-kubelet-config\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.867185 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.867013 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a7cbf992-bfb2-4889-ba57-9de812ce16d4-kubelet-config\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.867185 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.867093 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: 
\"kubernetes.io/host-path/a7cbf992-bfb2-4889-ba57-9de812ce16d4-dbus\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.867185 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.867119 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.867442 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.867230 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/a7cbf992-bfb2-4889-ba57-9de812ce16d4-kubelet-config\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.867442 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:25.867257 2571 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:25.867442 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:25.867296 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/a7cbf992-bfb2-4889-ba57-9de812ce16d4-dbus\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:25.867442 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:25.867314 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret podName:a7cbf992-bfb2-4889-ba57-9de812ce16d4 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:26.367295812 +0000 UTC m=+7.840525289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret") pod "global-pull-secret-syncer-mtgtk" (UID: "a7cbf992-bfb2-4889-ba57-9de812ce16d4") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:26.030436 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:26.030351 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:26.030592 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:26.030482 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:26.370812 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:26.370741 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:26.370986 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:26.370879 2571 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:26.370986 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:26.370944 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret podName:a7cbf992-bfb2-4889-ba57-9de812ce16d4 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:27.37092701 +0000 UTC m=+8.844156477 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret") pod "global-pull-secret-syncer-mtgtk" (UID: "a7cbf992-bfb2-4889-ba57-9de812ce16d4") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:27.030943 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:27.030843 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:27.031388 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:27.030998 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:27.031388 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:27.031362 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:27.031484 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:27.031451 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:27.378582 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:27.378499 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:27.378757 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:27.378678 2571 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:27.378757 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:27.378755 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret podName:a7cbf992-bfb2-4889-ba57-9de812ce16d4 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:29.37873591 +0000 UTC m=+10.851965375 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret") pod "global-pull-secret-syncer-mtgtk" (UID: "a7cbf992-bfb2-4889-ba57-9de812ce16d4") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:28.031021 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:28.030489 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:28.031021 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:28.030632 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:28.590110 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:28.590076 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:28.590301 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:28.590184 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:28.590301 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:28.590247 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:41:36.590227881 +0000 UTC m=+18.063457339 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 17:41:28.791506 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:28.791470 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29"
Apr 16 17:41:28.791677 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:28.791650 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 16 17:41:28.791677 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:28.791677 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 16 17:41:28.791798 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:28.791689 2571 projected.go:194] Error preparing data for projected volume kube-api-access-527cz for pod openshift-network-diagnostics/network-check-target-wrf29: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 17:41:28.791798 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:28.791747 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz podName:679af96e-47e1-4212-8ee0-e6d82d302834 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:36.791728027 +0000 UTC m=+18.264957485 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-527cz" (UniqueName: "kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz") pod "network-check-target-wrf29" (UID: "679af96e-47e1-4212-8ee0-e6d82d302834") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 17:41:29.031899 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:29.031867 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5"
Apr 16 17:41:29.032330 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:29.032026 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c"
Apr 16 17:41:29.032330 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:29.032080 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk"
Apr 16 17:41:29.032330 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:29.032155 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4"
Apr 16 17:41:29.395449 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:29.395415 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk"
Apr 16 17:41:29.395612 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:29.395556 2571 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 17:41:29.395672 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:29.395622 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret podName:a7cbf992-bfb2-4889-ba57-9de812ce16d4 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:33.395602027 +0000 UTC m=+14.868831479 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret") pod "global-pull-secret-syncer-mtgtk" (UID: "a7cbf992-bfb2-4889-ba57-9de812ce16d4") : object "kube-system"/"original-pull-secret" not registered
Apr 16 17:41:30.031254 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:30.031221 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29"
Apr 16 17:41:30.031418 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:30.031348 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834"
Apr 16 17:41:31.031104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:31.031073 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5"
Apr 16 17:41:31.031496 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:31.031073 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk"
Apr 16 17:41:31.031496 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:31.031194 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c"
Apr 16 17:41:31.031496 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:31.031279 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4"
Apr 16 17:41:32.031051 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:32.031020 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29"
Apr 16 17:41:32.031229 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:32.031135 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834"
Apr 16 17:41:33.030553 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:33.030516 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk"
Apr 16 17:41:33.030821 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:33.030516 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5"
Apr 16 17:41:33.030821 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:33.030637 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4"
Apr 16 17:41:33.030821 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:33.030752 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c"
Apr 16 17:41:33.425626 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:33.425592 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk"
Apr 16 17:41:33.426080 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:33.425722 2571 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 17:41:33.426080 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:33.425800 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret podName:a7cbf992-bfb2-4889-ba57-9de812ce16d4 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:41.425779996 +0000 UTC m=+22.899009446 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret") pod "global-pull-secret-syncer-mtgtk" (UID: "a7cbf992-bfb2-4889-ba57-9de812ce16d4") : object "kube-system"/"original-pull-secret" not registered
Apr 16 17:41:34.030689 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:34.030661 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29"
Apr 16 17:41:34.030871 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:34.030758 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834"
Apr 16 17:41:35.030463 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:35.030426 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk"
Apr 16 17:41:35.030920 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:35.030629 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4"
Apr 16 17:41:35.031189 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:35.031169 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5"
Apr 16 17:41:35.031368 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:35.031348 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c"
pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:36.030893 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:36.030850 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:36.031306 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:36.030967 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:36.651458 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:36.651422 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:36.651617 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:36.651553 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:36.651665 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:36.651631 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:41:52.651609959 +0000 UTC m=+34.124839422 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:36.852411 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:36.852373 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:36.852564 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:36.852544 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:41:36.852624 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:36.852568 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:41:36.852624 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:36.852581 2571 projected.go:194] Error preparing data for projected volume kube-api-access-527cz for pod openshift-network-diagnostics/network-check-target-wrf29: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:36.852711 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:36.852648 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz podName:679af96e-47e1-4212-8ee0-e6d82d302834 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:52.852628293 +0000 UTC m=+34.325857746 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-527cz" (UniqueName: "kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz") pod "network-check-target-wrf29" (UID: "679af96e-47e1-4212-8ee0-e6d82d302834") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:37.030884 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:37.030773 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:37.031046 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:37.030936 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:37.031046 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:37.031004 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:37.031488 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:37.031116 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:38.030871 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:38.030828 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:38.031050 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:38.030951 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:39.033676 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.033510 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:39.034093 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:39.033774 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:39.034093 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.033589 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:39.034203 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:39.034182 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:39.151025 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.150961 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" event={"ID":"5e111816-868e-46a5-9605-122507f445ea","Type":"ContainerStarted","Data":"3e39fbcfff7e120771a5f90c32907271e8348567953b313b76c8aa9d9bdc5a8c"} Apr 16 17:41:39.152202 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.152180 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-hc5pn" event={"ID":"21c668ba-6d2b-43f2-926f-50b6a51598db","Type":"ContainerStarted","Data":"23998ce62188221ea73fdc63968f72e0061efd5dfa60d05c8cee7550853a5dab"} Apr 16 17:41:39.153344 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.153324 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" event={"ID":"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7","Type":"ContainerStarted","Data":"5bfffaaa6fb07a8cac9d174bac0fb9acaabd90d325986319cdf7f3c3ca3c945f"} Apr 16 17:41:39.154419 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.154395 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hdwwl" event={"ID":"dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01","Type":"ContainerStarted","Data":"14f1a90848341eef6ce75d16f27f82063cb48822676f9e0e7aabfd004ce03fe3"} Apr 16 17:41:39.155589 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.155570 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerStarted","Data":"9c5f252096d193e38198585282a104899971f9750854870c673b3cd962429fea"} Apr 16 17:41:39.156723 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.156691 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cgldj" event={"ID":"3d328985-f90b-469d-a8a4-9962d8311ef2","Type":"ContainerStarted","Data":"4abb34470c4cf6735a7a3d7ec1bdcc046fc3001a7652025d79ffd004929b5946"} Apr 16 17:41:39.157868 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.157834 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lqx9" event={"ID":"b9b6b64b-dc77-4472-8311-249ac8242441","Type":"ContainerStarted","Data":"6498202a73fbab7dcdd3b4b6483e1a4be54625fa246397d08b384f3f9bd3a100"} Apr 16 17:41:39.171449 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.171404 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-fv6lf" podStartSLOduration=2.973277755 podStartE2EDuration="20.171389219s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.461135082 +0000 UTC m=+2.934364548" lastFinishedPulling="2026-04-16 17:41:38.659246553 +0000 UTC m=+20.132476012" observedRunningTime="2026-04-16 17:41:39.171169473 +0000 UTC m=+20.644398946" watchObservedRunningTime="2026-04-16 17:41:39.171389219 +0000 UTC m=+20.644618769" Apr 16 17:41:39.229991 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.229952 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-hc5pn" podStartSLOduration=3.028488043 podStartE2EDuration="20.229938072s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.457614148 +0000 UTC m=+2.930843603" lastFinishedPulling="2026-04-16 
17:41:38.65906417 +0000 UTC m=+20.132293632" observedRunningTime="2026-04-16 17:41:39.229738265 +0000 UTC m=+20.702967739" watchObservedRunningTime="2026-04-16 17:41:39.229938072 +0000 UTC m=+20.703167544" Apr 16 17:41:39.230232 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.230205 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2lqx9" podStartSLOduration=2.988013799 podStartE2EDuration="20.230197057s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.450904588 +0000 UTC m=+2.924134045" lastFinishedPulling="2026-04-16 17:41:38.693087839 +0000 UTC m=+20.166317303" observedRunningTime="2026-04-16 17:41:39.215491972 +0000 UTC m=+20.688721447" watchObservedRunningTime="2026-04-16 17:41:39.230197057 +0000 UTC m=+20.703426530" Apr 16 17:41:39.246849 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.246811 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hdwwl" podStartSLOduration=3.035937376 podStartE2EDuration="20.246799614s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.448281962 +0000 UTC m=+2.921511427" lastFinishedPulling="2026-04-16 17:41:38.659144202 +0000 UTC m=+20.132373665" observedRunningTime="2026-04-16 17:41:39.245479878 +0000 UTC m=+20.718709352" watchObservedRunningTime="2026-04-16 17:41:39.246799614 +0000 UTC m=+20.720029087" Apr 16 17:41:39.882281 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.882128 2571 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 16 17:41:39.978984 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.978851 2571 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-16T17:41:39.882279133Z","UUID":"87a87f4a-5472-4208-b0e9-eb0f8a76a0c6","Handler":null,"Name":"","Endpoint":""} Apr 16 17:41:39.980724 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.980696 2571 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 16 17:41:39.980724 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:39.980727 2571 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 16 17:41:40.031079 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.031059 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:40.031164 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:40.031137 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
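
The plugin_watcher and csi_plugin entries above show the EBS CSI driver registering with the kubelet through a socket under /var/lib/kubelet/plugins_registry. A rough sketch that enumerates such registration sockets by the -reg.sock naming visible in the path above; the suffix convention is an inference from this one entry, and this is a read-only diagnostic, not the kubelet's registration protocol:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Directory taken from the plugin_watcher entry above.
	const dir = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), "-reg.sock") {
			// e.g. "ebs.csi.aws.com-reg.sock" -> "ebs.csi.aws.com"
			fmt.Println("registration socket for:", strings.TrimSuffix(e.Name(), "-reg.sock"))
		}
	}
}
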
pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:40.147173 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.147150 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:40.160786 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.160760 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" event={"ID":"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7","Type":"ContainerStarted","Data":"54015bffa4db49c828b0356539370d627de9391e6d085b5e441c7068f736976e"} Apr 16 17:41:40.163234 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.163214 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"f299ec2ee26b3e07f0ab6d04c02b2d6743543eefcca09ac35c76c5d845dcef79"} Apr 16 17:41:40.163316 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.163239 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"8d50d1ac529bdbf19d38ebd9662698523e2dd9d3cde7590cb1d9624d7e692d23"} Apr 16 17:41:40.163316 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.163250 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"a9055552d38031a3472777ed785be81afb01b88cee06eac5e27efe8bcbade67e"} Apr 16 17:41:40.163316 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.163259 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"80d53505a1d45c892954122caefefb8166e4d7256cede1388a66e13d592fec8b"} Apr 16 17:41:40.163316 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.163267 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"34722a47219eadd37b34ac00bbe60bbfe1a3f90f0b058d8b090180210a78569d"} Apr 16 17:41:40.163316 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.163276 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"ab6f08282f798fd4e835199e4867612f8a222a848c8237fe5cd53fb7fc7cd7f3"} Apr 16 17:41:40.164580 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.164558 2571 generic.go:358] "Generic (PLEG): container finished" podID="3e976806-7125-4a84-96f3-609791878cd8" containerID="9c5f252096d193e38198585282a104899971f9750854870c673b3cd962429fea" exitCode=0 Apr 16 17:41:40.164705 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.164677 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerDied","Data":"9c5f252096d193e38198585282a104899971f9750854870c673b3cd962429fea"} Apr 16 17:41:40.184795 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:40.184756 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cgldj" podStartSLOduration=3.979861813 
podStartE2EDuration="21.18474548s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.454284769 +0000 UTC m=+2.927514234" lastFinishedPulling="2026-04-16 17:41:38.659168437 +0000 UTC m=+20.132397901" observedRunningTime="2026-04-16 17:41:39.260789976 +0000 UTC m=+20.734019451" watchObservedRunningTime="2026-04-16 17:41:40.18474548 +0000 UTC m=+21.657975005" Apr 16 17:41:41.030511 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:41.030439 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:41.030511 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:41.030470 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:41.030793 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:41.030554 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:41.030793 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:41.030677 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:41.167302 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:41.167268 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wjvxt" event={"ID":"14054c8d-e316-4c67-a78f-0c44fc1bddc2","Type":"ContainerStarted","Data":"b239065beb6ec8585a4cd927c247cd1e7805726782ff7e0bf3e631f6c46f3bce"} Apr 16 17:41:41.169307 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:41.169280 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" event={"ID":"0cbb16f9-6753-46fc-bde2-d1725b6a8bd7","Type":"ContainerStarted","Data":"0d13016b21e9d654f272b290dc994ad049f856301031be04ed15867d0653d19f"} Apr 16 17:41:41.181318 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:41.181272 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-wjvxt" podStartSLOduration=4.980117561 podStartE2EDuration="22.181259131s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.457965493 +0000 UTC m=+2.931194949" lastFinishedPulling="2026-04-16 17:41:38.659107061 +0000 UTC m=+20.132336519" observedRunningTime="2026-04-16 17:41:41.181203529 +0000 UTC m=+22.654433003" watchObservedRunningTime="2026-04-16 17:41:41.181259131 +0000 UTC m=+22.654488616" Apr 16 17:41:41.199216 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:41.199171 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-cmzsn" podStartSLOduration=2.96494691 podStartE2EDuration="22.199157278s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.450589421 +0000 UTC m=+2.923818889" 
lastFinishedPulling="2026-04-16 17:41:40.684799805 +0000 UTC m=+22.158029257" observedRunningTime="2026-04-16 17:41:41.199026812 +0000 UTC m=+22.672256310" watchObservedRunningTime="2026-04-16 17:41:41.199157278 +0000 UTC m=+22.672386752" Apr 16 17:41:41.485022 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:41.484989 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:41.485199 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:41.485101 2571 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:41.485199 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:41.485164 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret podName:a7cbf992-bfb2-4889-ba57-9de812ce16d4 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:57.485146098 +0000 UTC m=+38.958375562 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret") pod "global-pull-secret-syncer-mtgtk" (UID: "a7cbf992-bfb2-4889-ba57-9de812ce16d4") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:42.030545 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:42.030321 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:42.030707 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:42.030660 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:42.173931 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:42.173892 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"776803120b1fa6187b08c77f8afdf3ae4d4153caa52461051985c9a1f7d2d19c"} Apr 16 17:41:43.031159 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:43.031127 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:43.031336 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:43.031273 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:43.031387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:43.031328 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:43.031462 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:43.031442 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:44.030968 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:44.030937 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:44.031608 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:44.031056 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:44.049821 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:44.049796 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:44.050383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:44.050366 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:44.180868 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:44.180799 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" event={"ID":"8976bd90-ca55-4016-a1b8-fffeec56d443","Type":"ContainerStarted","Data":"0acee0feb140cfeaf240d07883daa66cd39d935131c424564928fa268da00402"} Apr 16 17:41:44.181157 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:44.181136 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:44.181642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:44.181627 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-hc5pn" Apr 16 17:41:44.205564 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:44.205071 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" podStartSLOduration=7.431929725 podStartE2EDuration="25.205053631s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.451762392 +0000 UTC m=+2.924991846" lastFinishedPulling="2026-04-16 17:41:39.224886295 +0000 UTC m=+20.698115752" observedRunningTime="2026-04-16 17:41:44.203963649 +0000 UTC m=+25.677193123" watchObservedRunningTime="2026-04-16 17:41:44.205053631 +0000 UTC m=+25.678283105" Apr 16 17:41:45.031421 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:45.031192 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:45.032148 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:45.031197 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:45.032148 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:45.031550 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:45.032148 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:45.031578 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:45.184048 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:45.184010 2571 generic.go:358] "Generic (PLEG): container finished" podID="3e976806-7125-4a84-96f3-609791878cd8" containerID="c77ec192be50acd5267b2f78f01ca6aed200942c5a9a81e270a290126d71a484" exitCode=0 Apr 16 17:41:45.184164 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:45.184096 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerDied","Data":"c77ec192be50acd5267b2f78f01ca6aed200942c5a9a81e270a290126d71a484"} Apr 16 17:41:45.185422 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:45.184836 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:45.185422 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:45.184876 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:45.201170 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:45.201145 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:45.201328 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:45.201311 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:41:46.031121 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.031067 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:46.031221 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:46.031165 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:46.119826 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.119802 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-mtgtk"] Apr 16 17:41:46.120160 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.119918 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:46.120160 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:46.119990 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:46.123052 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.123021 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-wrf29"] Apr 16 17:41:46.123792 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.123770 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-x4kh5"] Apr 16 17:41:46.123915 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.123901 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:46.124049 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:46.124018 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:46.188167 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.188145 2571 generic.go:358] "Generic (PLEG): container finished" podID="3e976806-7125-4a84-96f3-609791878cd8" containerID="c20a96d1ede265299488990c07fd6c75c6ad930617e747f35822ba1c65ae0198" exitCode=0 Apr 16 17:41:46.188265 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.188173 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerDied","Data":"c20a96d1ede265299488990c07fd6c75c6ad930617e747f35822ba1c65ae0198"} Apr 16 17:41:46.188265 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:46.188211 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:46.188454 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:46.188436 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:47.192016 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:47.191985 2571 generic.go:358] "Generic (PLEG): container finished" podID="3e976806-7125-4a84-96f3-609791878cd8" containerID="9f6930635521eec2d4f5a480d558d6a5f0d106afbc9b77e4e448b5ba1b504929" exitCode=0 Apr 16 17:41:47.192345 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:47.192035 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerDied","Data":"9f6930635521eec2d4f5a480d558d6a5f0d106afbc9b77e4e448b5ba1b504929"} Apr 16 17:41:48.034270 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:48.031228 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:48.034270 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:48.031600 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:48.034270 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:48.031267 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:48.034270 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:48.031723 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:48.034270 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:48.031251 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:48.034270 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:48.031886 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:50.030487 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:50.030456 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:50.030970 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:50.030571 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:50.030970 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:50.030589 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:50.030970 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:50.030750 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:50.030970 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:50.030844 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:50.030970 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:50.030928 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:52.031254 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.031225 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:52.031839 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.031222 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:52.031839 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.031349 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c" Apr 16 17:41:52.031839 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.031225 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:52.031839 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.031415 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mtgtk" podUID="a7cbf992-bfb2-4889-ba57-9de812ce16d4" Apr 16 17:41:52.031839 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.031487 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-wrf29" podUID="679af96e-47e1-4212-8ee0-e6d82d302834" Apr 16 17:41:52.675115 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.675085 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:52.675293 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.675197 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:52.675293 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.675248 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:42:24.675231534 +0000 UTC m=+66.148460985 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:52.877110 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.877084 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:52.877262 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.877237 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:41:52.877307 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.877262 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:41:52.877307 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.877273 2571 projected.go:194] Error preparing data for projected volume kube-api-access-527cz for pod openshift-network-diagnostics/network-check-target-wrf29: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:52.877380 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:52.877341 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz podName:679af96e-47e1-4212-8ee0-e6d82d302834 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:24.87732539 +0000 UTC m=+66.350554841 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-527cz" (UniqueName: "kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz") pod "network-check-target-wrf29" (UID: "679af96e-47e1-4212-8ee0-e6d82d302834") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:52.889870 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.889831 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-138-134.ec2.internal" event="NodeReady" Apr 16 17:41:52.889981 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.889965 2571 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 16 17:41:52.923788 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.923760 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-65c7c55b77-4pqmk"] Apr 16 17:41:52.937010 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.936992 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5pktg"] Apr 16 17:41:52.937167 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.937151 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:52.939121 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.939097 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 16 17:41:52.939238 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.939100 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 16 17:41:52.939238 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.939104 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-vf8ds\"" Apr 16 17:41:52.939238 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.939106 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 16 17:41:52.951760 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.951738 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 16 17:41:52.952470 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.952348 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2l84s"] Apr 16 17:41:52.952700 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.952682 2571 util.go:30] "No sandbox for pod can be found. 
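
Timestamps of the form "2026-04-16 17:41:39.171389219 +0000 UTC m=+20.644398946" pair a wall-clock reading with a monotonic offset since process start, so wall time minus the m=+ offset recovers when this kubelet process started; that reading puts the retry deadlines just above (m=+66.1, m=+66.3) about 66 seconds into the kubelet's life. A sketch of the conversion, using one value pair copied from the entries above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Wall-clock half of the pair, in Go's default time formatting.
	wall, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-04-16 17:41:39.171389219 +0000 UTC")
	if err != nil {
		fmt.Println(err)
		return
	}
	// The m=+ half: seconds since process start.
	mono := time.Duration(20.644398946 * float64(time.Second))
	fmt.Println("kubelet process started ~", wall.Add(-mono).UTC())
}
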
Need to start a new one" pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:52.954478 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.954419 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 16 17:41:52.954478 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.954422 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 16 17:41:52.954755 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.954738 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-2vlxb\"" Apr 16 17:41:52.964500 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.964482 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2l84s"] Apr 16 17:41:52.964589 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.964521 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5pktg"] Apr 16 17:41:52.964589 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.964547 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-65c7c55b77-4pqmk"] Apr 16 17:41:52.964589 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.964575 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:52.966582 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.966565 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 16 17:41:52.966729 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.966713 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-w8phs\"" Apr 16 17:41:52.966801 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.966750 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 16 17:41:52.966896 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:52.966843 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 16 17:41:53.077828 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.077759 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-installation-pull-secrets\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.077828 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.077801 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-bound-sa-token\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.077836 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwgr\" (UniqueName: \"kubernetes.io/projected/971c52dd-85d8-47b8-b0b5-7369b7459c82-kube-api-access-lpwgr\") pod 
\"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.077881 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971c52dd-85d8-47b8-b0b5-7369b7459c82-config-volume\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.077906 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/971c52dd-85d8-47b8-b0b5-7369b7459c82-tmp-dir\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.077944 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.078052 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-image-registry-private-configuration\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.078084 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.078100 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-ca-trust-extracted\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.078115 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-certificates\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.078159 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-trusted-ca\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " 
pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.078225 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.078256 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58j9\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-kube-api-access-b58j9\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.078383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.078314 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl2zt\" (UniqueName: \"kubernetes.io/projected/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-kube-api-access-hl2zt\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:53.179097 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179066 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwgr\" (UniqueName: \"kubernetes.io/projected/971c52dd-85d8-47b8-b0b5-7369b7459c82-kube-api-access-lpwgr\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.179097 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179099 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971c52dd-85d8-47b8-b0b5-7369b7459c82-config-volume\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.179291 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179123 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/971c52dd-85d8-47b8-b0b5-7369b7459c82-tmp-dir\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.179291 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179151 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:53.179291 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.179231 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 17:41:53.179440 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179342 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-image-registry-private-configuration\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " 
pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.179440 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.179366 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert podName:b6580a74-c19a-4cd2-b8e2-e8a8423dc761 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:53.679328364 +0000 UTC m=+35.152557826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert") pod "ingress-canary-2l84s" (UID: "b6580a74-c19a-4cd2-b8e2-e8a8423dc761") : secret "canary-serving-cert" not found Apr 16 17:41:53.179440 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179419 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.179605 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179453 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-ca-trust-extracted\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.179605 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179481 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-certificates\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.179605 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179506 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-trusted-ca\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.179605 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179512 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/971c52dd-85d8-47b8-b0b5-7369b7459c82-tmp-dir\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.179605 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.179534 2571 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 17:41:53.179605 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.179550 2571 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-65c7c55b77-4pqmk: secret "image-registry-tls" not found Apr 16 17:41:53.179605 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.179601 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls podName:bd43cd3e-442d-4c1a-9eab-890d57ec3a13 nodeName:}" failed. 
No retries permitted until 2026-04-16 17:41:53.679584189 +0000 UTC m=+35.152813643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls") pod "image-registry-65c7c55b77-4pqmk" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13") : secret "image-registry-tls" not found Apr 16 17:41:53.179605 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.179607 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 17:41:53.180012 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179536 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.180012 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179649 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b58j9\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-kube-api-access-b58j9\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.180012 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.179680 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls podName:971c52dd-85d8-47b8-b0b5-7369b7459c82 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:53.679669659 +0000 UTC m=+35.152899110 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls") pod "dns-default-5pktg" (UID: "971c52dd-85d8-47b8-b0b5-7369b7459c82") : secret "dns-default-metrics-tls" not found Apr 16 17:41:53.180012 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179696 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971c52dd-85d8-47b8-b0b5-7369b7459c82-config-volume\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.180012 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179706 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hl2zt\" (UniqueName: \"kubernetes.io/projected/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-kube-api-access-hl2zt\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:53.180012 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179778 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-installation-pull-secrets\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.180012 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.179810 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-bound-sa-token\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.180370 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.180159 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-certificates\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.180420 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.180368 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-ca-trust-extracted\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.180420 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.180390 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-trusted-ca\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.185153 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.185128 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-installation-pull-secrets\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: 
\"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.185279 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.185128 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-image-registry-private-configuration\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.190244 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.190223 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwgr\" (UniqueName: \"kubernetes.io/projected/971c52dd-85d8-47b8-b0b5-7369b7459c82-kube-api-access-lpwgr\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.192963 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.192937 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-bound-sa-token\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.193505 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.193483 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl2zt\" (UniqueName: \"kubernetes.io/projected/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-kube-api-access-hl2zt\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:53.193812 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.193797 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58j9\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-kube-api-access-b58j9\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.205973 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.205947 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerStarted","Data":"9fd1c154ab07b72f7036b71107450cf631913acb40ba89c6470b10ad5dc63126"} Apr 16 17:41:53.684244 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.684218 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:53.684408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.684249 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:53.684408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:53.684288 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:53.684408 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.684361 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 17:41:53.684408 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.684404 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert podName:b6580a74-c19a-4cd2-b8e2-e8a8423dc761 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:54.684390724 +0000 UTC m=+36.157620175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert") pod "ingress-canary-2l84s" (UID: "b6580a74-c19a-4cd2-b8e2-e8a8423dc761") : secret "canary-serving-cert" not found Apr 16 17:41:53.684616 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.684362 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 17:41:53.684616 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.684435 2571 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 17:41:53.684616 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.684454 2571 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-65c7c55b77-4pqmk: secret "image-registry-tls" not found Apr 16 17:41:53.684616 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.684479 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls podName:971c52dd-85d8-47b8-b0b5-7369b7459c82 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:54.684460343 +0000 UTC m=+36.157689794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls") pod "dns-default-5pktg" (UID: "971c52dd-85d8-47b8-b0b5-7369b7459c82") : secret "dns-default-metrics-tls" not found Apr 16 17:41:53.684616 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:53.684501 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls podName:bd43cd3e-442d-4c1a-9eab-890d57ec3a13 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:54.684486098 +0000 UTC m=+36.157715563 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls") pod "image-registry-65c7c55b77-4pqmk" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13") : secret "image-registry-tls" not found Apr 16 17:41:54.030377 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.030310 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:41:54.030674 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.030319 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:54.030674 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.030319 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:41:54.033462 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.033438 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 17:41:54.033462 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.033453 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 17:41:54.033462 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.033438 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 17:41:54.033667 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.033445 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 17:41:54.033667 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.033445 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-8xx6l\"" Apr 16 17:41:54.033667 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.033459 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-qnscp\"" Apr 16 17:41:54.212257 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.212227 2571 generic.go:358] "Generic (PLEG): container finished" podID="3e976806-7125-4a84-96f3-609791878cd8" containerID="9fd1c154ab07b72f7036b71107450cf631913acb40ba89c6470b10ad5dc63126" exitCode=0 Apr 16 17:41:54.212257 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.212261 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerDied","Data":"9fd1c154ab07b72f7036b71107450cf631913acb40ba89c6470b10ad5dc63126"} Apr 16 17:41:54.692316 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.692291 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:54.692452 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.692351 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:54.692452 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:54.692391 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:54.692521 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:54.692449 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: 
secret "dns-default-metrics-tls" not found Apr 16 17:41:54.692521 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:54.692467 2571 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 17:41:54.692521 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:54.692479 2571 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-65c7c55b77-4pqmk: secret "image-registry-tls" not found Apr 16 17:41:54.692521 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:54.692490 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 17:41:54.692521 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:54.692521 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls podName:971c52dd-85d8-47b8-b0b5-7369b7459c82 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:56.692505849 +0000 UTC m=+38.165735300 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls") pod "dns-default-5pktg" (UID: "971c52dd-85d8-47b8-b0b5-7369b7459c82") : secret "dns-default-metrics-tls" not found Apr 16 17:41:54.692674 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:54.692534 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls podName:bd43cd3e-442d-4c1a-9eab-890d57ec3a13 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:56.69252897 +0000 UTC m=+38.165758420 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls") pod "image-registry-65c7c55b77-4pqmk" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13") : secret "image-registry-tls" not found Apr 16 17:41:54.692674 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:54.692546 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert podName:b6580a74-c19a-4cd2-b8e2-e8a8423dc761 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:56.692539663 +0000 UTC m=+38.165769113 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert") pod "ingress-canary-2l84s" (UID: "b6580a74-c19a-4cd2-b8e2-e8a8423dc761") : secret "canary-serving-cert" not found Apr 16 17:41:55.215849 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:55.215820 2571 generic.go:358] "Generic (PLEG): container finished" podID="3e976806-7125-4a84-96f3-609791878cd8" containerID="47d791903fcab0606de420a18f672a492c1d88219c0a9cd690c6cf64d72b331b" exitCode=0 Apr 16 17:41:55.216294 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:55.215875 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerDied","Data":"47d791903fcab0606de420a18f672a492c1d88219c0a9cd690c6cf64d72b331b"} Apr 16 17:41:56.221118 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:56.221083 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tlljg" event={"ID":"3e976806-7125-4a84-96f3-609791878cd8","Type":"ContainerStarted","Data":"5ca7ec6259c74d83e1e087036973076fe081879fe3909b1524445fc64a0b6080"} Apr 16 17:41:56.243803 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:56.243760 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tlljg" podStartSLOduration=5.699146616 podStartE2EDuration="37.243745762s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:41:21.460486917 +0000 UTC m=+2.933716387" lastFinishedPulling="2026-04-16 17:41:53.005086079 +0000 UTC m=+34.478315533" observedRunningTime="2026-04-16 17:41:56.242432944 +0000 UTC m=+37.715662417" watchObservedRunningTime="2026-04-16 17:41:56.243745762 +0000 UTC m=+37.716975234" Apr 16 17:41:56.705526 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:56.705494 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:41:56.705734 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:56.705561 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:41:56.705734 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:56.705588 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:41:56.705734 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:56.705631 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 17:41:56.705734 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:56.705686 2571 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 17:41:56.705734 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:56.705695 2571 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert podName:b6580a74-c19a-4cd2-b8e2-e8a8423dc761 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:00.705675692 +0000 UTC m=+42.178905145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert") pod "ingress-canary-2l84s" (UID: "b6580a74-c19a-4cd2-b8e2-e8a8423dc761") : secret "canary-serving-cert" not found Apr 16 17:41:56.705734 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:56.705699 2571 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-65c7c55b77-4pqmk: secret "image-registry-tls" not found Apr 16 17:41:56.705734 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:56.705714 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 17:41:56.706065 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:56.705755 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls podName:bd43cd3e-442d-4c1a-9eab-890d57ec3a13 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:00.70574343 +0000 UTC m=+42.178972900 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls") pod "image-registry-65c7c55b77-4pqmk" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13") : secret "image-registry-tls" not found Apr 16 17:41:56.706065 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:41:56.705799 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls podName:971c52dd-85d8-47b8-b0b5-7369b7459c82 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:00.705782237 +0000 UTC m=+42.179011700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls") pod "dns-default-5pktg" (UID: "971c52dd-85d8-47b8-b0b5-7369b7459c82") : secret "dns-default-metrics-tls" not found Apr 16 17:41:57.510746 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:57.510708 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:57.513902 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:57.513880 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/a7cbf992-bfb2-4889-ba57-9de812ce16d4-original-pull-secret\") pod \"global-pull-secret-syncer-mtgtk\" (UID: \"a7cbf992-bfb2-4889-ba57-9de812ce16d4\") " pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:57.645334 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:57.645299 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mtgtk" Apr 16 17:41:57.800977 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:57.800921 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-mtgtk"] Apr 16 17:41:57.804481 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:41:57.804457 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7cbf992_bfb2_4889_ba57_9de812ce16d4.slice/crio-43e367241bef114b627b990aa7be373a56af1048ee36da269369441d4cbeb10e WatchSource:0}: Error finding container 43e367241bef114b627b990aa7be373a56af1048ee36da269369441d4cbeb10e: Status 404 returned error can't find the container with id 43e367241bef114b627b990aa7be373a56af1048ee36da269369441d4cbeb10e Apr 16 17:41:58.225516 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:41:58.225485 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-mtgtk" event={"ID":"a7cbf992-bfb2-4889-ba57-9de812ce16d4","Type":"ContainerStarted","Data":"43e367241bef114b627b990aa7be373a56af1048ee36da269369441d4cbeb10e"} Apr 16 17:42:00.733673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:00.733636 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:00.733708 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:00.733737 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:00.733795 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:00.733874 2571 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:00.733894 2571 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-65c7c55b77-4pqmk: secret "image-registry-tls" not found Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:00.733924 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:00.733878 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert podName:b6580a74-c19a-4cd2-b8e2-e8a8423dc761 nodeName:}" failed. 
No retries permitted until 2026-04-16 17:42:08.733848659 +0000 UTC m=+50.207078110 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert") pod "ingress-canary-2l84s" (UID: "b6580a74-c19a-4cd2-b8e2-e8a8423dc761") : secret "canary-serving-cert" not found Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:00.734013 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls podName:bd43cd3e-442d-4c1a-9eab-890d57ec3a13 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:08.733984309 +0000 UTC m=+50.207213773 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls") pod "image-registry-65c7c55b77-4pqmk" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13") : secret "image-registry-tls" not found Apr 16 17:42:00.734158 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:00.734030 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls podName:971c52dd-85d8-47b8-b0b5-7369b7459c82 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:08.734021903 +0000 UTC m=+50.207251360 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls") pod "dns-default-5pktg" (UID: "971c52dd-85d8-47b8-b0b5-7369b7459c82") : secret "dns-default-metrics-tls" not found Apr 16 17:42:03.236142 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:03.236102 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-mtgtk" event={"ID":"a7cbf992-bfb2-4889-ba57-9de812ce16d4","Type":"ContainerStarted","Data":"c6f9c2d1b05504060f93f65a1ffcae59e655c6e412a0cda6508389c8d41b069c"} Apr 16 17:42:03.253789 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:03.253740 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-mtgtk" podStartSLOduration=33.952441544 podStartE2EDuration="38.253726556s" podCreationTimestamp="2026-04-16 17:41:25 +0000 UTC" firstStartedPulling="2026-04-16 17:41:57.806694032 +0000 UTC m=+39.279923487" lastFinishedPulling="2026-04-16 17:42:02.107979034 +0000 UTC m=+43.581208499" observedRunningTime="2026-04-16 17:42:03.252811062 +0000 UTC m=+44.726040536" watchObservedRunningTime="2026-04-16 17:42:03.253726556 +0000 UTC m=+44.726956007" Apr 16 17:42:08.789350 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:08.789316 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:08.789368 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:08.789388 2571 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:08.789470 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:08.789475 2571 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:08.789494 2571 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-65c7c55b77-4pqmk: secret "image-registry-tls" not found Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:08.789515 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls podName:971c52dd-85d8-47b8-b0b5-7369b7459c82 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:24.789501697 +0000 UTC m=+66.262731148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls") pod "dns-default-5pktg" (UID: "971c52dd-85d8-47b8-b0b5-7369b7459c82") : secret "dns-default-metrics-tls" not found Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:08.789549 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls podName:bd43cd3e-442d-4c1a-9eab-890d57ec3a13 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:24.789535027 +0000 UTC m=+66.262764477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls") pod "image-registry-65c7c55b77-4pqmk" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13") : secret "image-registry-tls" not found Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:08.789575 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 17:42:08.789716 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:08.789624 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert podName:b6580a74-c19a-4cd2-b8e2-e8a8423dc761 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:24.789614303 +0000 UTC m=+66.262843755 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert") pod "ingress-canary-2l84s" (UID: "b6580a74-c19a-4cd2-b8e2-e8a8423dc761") : secret "canary-serving-cert" not found Apr 16 17:42:17.203962 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:17.203925 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69xw9" Apr 16 17:42:24.696998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.696959 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:42:24.699442 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.699421 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 17:42:24.708139 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.708122 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 17:42:24.708230 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.708180 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:43:28.708161589 +0000 UTC m=+130.181391039 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : secret "metrics-daemon-secret" not found Apr 16 17:42:24.797168 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.797141 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:42:24.797260 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.797187 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:42:24.797260 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.797210 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:42:24.797332 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.797284 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 17:42:24.797365 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.797342 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert 
podName:b6580a74-c19a-4cd2-b8e2-e8a8423dc761 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:56.79732853 +0000 UTC m=+98.270557980 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert") pod "ingress-canary-2l84s" (UID: "b6580a74-c19a-4cd2-b8e2-e8a8423dc761") : secret "canary-serving-cert" not found Apr 16 17:42:24.797404 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.797289 2571 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 17:42:24.797404 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.797393 2571 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-65c7c55b77-4pqmk: secret "image-registry-tls" not found Apr 16 17:42:24.797463 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.797429 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls podName:bd43cd3e-442d-4c1a-9eab-890d57ec3a13 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:56.797417847 +0000 UTC m=+98.270647298 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls") pod "image-registry-65c7c55b77-4pqmk" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13") : secret "image-registry-tls" not found Apr 16 17:42:24.797463 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.797295 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 17:42:24.797463 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:24.797455 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls podName:971c52dd-85d8-47b8-b0b5-7369b7459c82 nodeName:}" failed. No retries permitted until 2026-04-16 17:42:56.797449938 +0000 UTC m=+98.270679388 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls") pod "dns-default-5pktg" (UID: "971c52dd-85d8-47b8-b0b5-7369b7459c82") : secret "dns-default-metrics-tls" not found Apr 16 17:42:24.898104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.898078 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:42:24.900274 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.900258 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 17:42:24.910654 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.910636 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 17:42:24.921671 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.921646 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-527cz\" (UniqueName: \"kubernetes.io/projected/679af96e-47e1-4212-8ee0-e6d82d302834-kube-api-access-527cz\") pod \"network-check-target-wrf29\" (UID: \"679af96e-47e1-4212-8ee0-e6d82d302834\") " pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:42:24.941592 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.941567 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-qnscp\"" Apr 16 17:42:24.950307 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:24.950268 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:42:25.059140 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:25.059106 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-wrf29"] Apr 16 17:42:25.063295 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:42:25.063270 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod679af96e_47e1_4212_8ee0_e6d82d302834.slice/crio-7556b9006a2d3a2717a558077fcfb0169076b345ac25859c93382b9dae79a652 WatchSource:0}: Error finding container 7556b9006a2d3a2717a558077fcfb0169076b345ac25859c93382b9dae79a652: Status 404 returned error can't find the container with id 7556b9006a2d3a2717a558077fcfb0169076b345ac25859c93382b9dae79a652 Apr 16 17:42:25.282019 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:25.281959 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-wrf29" event={"ID":"679af96e-47e1-4212-8ee0-e6d82d302834","Type":"ContainerStarted","Data":"7556b9006a2d3a2717a558077fcfb0169076b345ac25859c93382b9dae79a652"} Apr 16 17:42:28.289845 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:28.289797 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-wrf29" event={"ID":"679af96e-47e1-4212-8ee0-e6d82d302834","Type":"ContainerStarted","Data":"57df6317bcc3e096e6b223e4978ace227834a2c45e0105b55ae67f12c93b9286"} Apr 16 17:42:28.290278 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:28.289953 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:42:28.305231 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:28.305182 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-wrf29" podStartSLOduration=66.60744692 podStartE2EDuration="1m9.305169177s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:42:25.065005227 +0000 UTC m=+66.538234682" lastFinishedPulling="2026-04-16 17:42:27.762727485 +0000 UTC m=+69.235956939" observedRunningTime="2026-04-16 17:42:28.30434319 +0000 UTC m=+69.777572663" watchObservedRunningTime="2026-04-16 17:42:28.305169177 +0000 UTC m=+69.778398650" Apr 16 17:42:56.810615 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:56.810580 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:56.810624 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:56.810644 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: 
\"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:56.810725 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:56.810726 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:56.810772 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls podName:971c52dd-85d8-47b8-b0b5-7369b7459c82 nodeName:}" failed. No retries permitted until 2026-04-16 17:44:00.810758598 +0000 UTC m=+162.283988049 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls") pod "dns-default-5pktg" (UID: "971c52dd-85d8-47b8-b0b5-7369b7459c82") : secret "dns-default-metrics-tls" not found Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:56.810790 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert podName:b6580a74-c19a-4cd2-b8e2-e8a8423dc761 nodeName:}" failed. No retries permitted until 2026-04-16 17:44:00.810777017 +0000 UTC m=+162.284006468 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert") pod "ingress-canary-2l84s" (UID: "b6580a74-c19a-4cd2-b8e2-e8a8423dc761") : secret "canary-serving-cert" not found Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:56.810729 2571 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:56.810804 2571 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-65c7c55b77-4pqmk: secret "image-registry-tls" not found Apr 16 17:42:56.811035 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:42:56.810827 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls podName:bd43cd3e-442d-4c1a-9eab-890d57ec3a13 nodeName:}" failed. No retries permitted until 2026-04-16 17:44:00.810820088 +0000 UTC m=+162.284049539 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls") pod "image-registry-65c7c55b77-4pqmk" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13") : secret "image-registry-tls" not found Apr 16 17:42:59.293888 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:42:59.293846 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-wrf29" Apr 16 17:43:27.685948 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.685915 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"] Apr 16 17:43:27.687579 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.687563 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:27.691382 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.691356 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\"" Apr 16 17:43:27.691515 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.691360 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 16 17:43:27.691579 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.691441 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-fmxs4\"" Apr 16 17:43:27.691579 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.691446 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemetry-config\"" Apr 16 17:43:27.691579 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.691573 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 16 17:43:27.692449 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.692429 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-5785d4fcdd-7pxpm"] Apr 16 17:43:27.694134 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.693933 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.696062 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.696041 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 16 17:43:27.696159 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.696068 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 16 17:43:27.696159 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.696085 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"service-ca-bundle\"" Apr 16 17:43:27.696159 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.696101 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"operator-dockercfg-mvb96\"" Apr 16 17:43:27.696159 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.696088 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"openshift-insights-serving-cert\"" Apr 16 17:43:27.699009 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.698988 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"] Apr 16 17:43:27.700604 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.700585 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"trusted-ca-bundle\"" Apr 16 17:43:27.705210 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.705188 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-5785d4fcdd-7pxpm"] Apr 16 17:43:27.789922 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.789885 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-6c44cbbfb4-k2krf"] Apr 16 17:43:27.792374 ip-10-0-138-134 kubenswrapper[2571]: I0416 
17:43:27.792350 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:27.794463 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.794435 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"default-ingress-cert\"" Apr 16 17:43:27.794577 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.794526 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Apr 16 17:43:27.794577 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.794526 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Apr 16 17:43:27.794577 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.794552 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Apr 16 17:43:27.794754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.794604 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-6z48b\"" Apr 16 17:43:27.794754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.794630 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Apr 16 17:43:27.794869 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.794827 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Apr 16 17:43:27.803014 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.802995 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-6c44cbbfb4-k2krf"] Apr 16 17:43:27.810399 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810381 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbefec75-4a25-443d-8f9e-1fc4a14fce37-telemetry-config\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:27.810487 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810411 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:27.810487 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810441 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6b6c\" (UniqueName: \"kubernetes.io/projected/dbefec75-4a25-443d-8f9e-1fc4a14fce37-kube-api-access-f6b6c\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:27.810487 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810457 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-trusted-ca-bundle\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.810487 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810472 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-service-ca-bundle\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.810639 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810522 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6927\" (UniqueName: \"kubernetes.io/projected/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-kube-api-access-t6927\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.810639 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810555 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-snapshots\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.810639 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810572 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-serving-cert\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.810639 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.810588 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-tmp\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.911788 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.911770 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:27.911904 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.911798 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-default-certificate\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:27.911904 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.911823 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-serving-cert\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.911904 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.911846 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6b6c\" (UniqueName: \"kubernetes.io/projected/dbefec75-4a25-443d-8f9e-1fc4a14fce37-kube-api-access-f6b6c\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:27.911904 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.911888 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-stats-auth\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:27.911904 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.911905 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:27.912087 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.911921 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6927\" (UniqueName: \"kubernetes.io/projected/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-kube-api-access-t6927\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.912163 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912144 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-tmp\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.912215 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912198 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2bpk\" (UniqueName: \"kubernetes.io/projected/9fc88243-e880-437b-8e63-4cc27b6580f3-kube-api-access-h2bpk\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:27.912269 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912220 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbefec75-4a25-443d-8f9e-1fc4a14fce37-telemetry-config\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:27.912269 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912241 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:27.912368 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912278 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-service-ca-bundle\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.912368 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912297 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-trusted-ca-bundle\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.912368 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912315 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-snapshots\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.912520 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:27.912425 2571 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 16 17:43:27.912520 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:27.912512 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls podName:dbefec75-4a25-443d-8f9e-1fc4a14fce37 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:28.412493858 +0000 UTC m=+129.885723321 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-cr6kk" (UID: "dbefec75-4a25-443d-8f9e-1fc4a14fce37") : secret "cluster-monitoring-operator-tls" not found Apr 16 17:43:27.912649 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912576 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-tmp\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.912902 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912880 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-service-ca-bundle\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.912945 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912932 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-snapshots\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.913016 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.912998 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbefec75-4a25-443d-8f9e-1fc4a14fce37-telemetry-config\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:27.913300 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.913277 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-trusted-ca-bundle\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.914390 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.914370 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-serving-cert\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.919379 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.919353 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6927\" (UniqueName: \"kubernetes.io/projected/25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611-kube-api-access-t6927\") pod \"insights-operator-5785d4fcdd-7pxpm\" (UID: \"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611\") " pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:27.923101 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:27.923078 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6b6c\" (UniqueName: 
\"kubernetes.io/projected/dbefec75-4a25-443d-8f9e-1fc4a14fce37-kube-api-access-f6b6c\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" Apr 16 17:43:28.004050 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.004002 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" Apr 16 17:43:28.012735 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.012714 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h2bpk\" (UniqueName: \"kubernetes.io/projected/9fc88243-e880-437b-8e63-4cc27b6580f3-kube-api-access-h2bpk\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:28.012829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.012766 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:28.012829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.012808 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-default-certificate\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:28.012975 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.012850 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-stats-auth\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:28.012975 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.012896 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:28.512878904 +0000 UTC m=+129.986108373 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : configmap references non-existent config key: service-ca.crt Apr 16 17:43:28.012975 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.012916 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:28.013147 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.013020 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 16 17:43:28.013147 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.013057 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:28.513043971 +0000 UTC m=+129.986273430 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : secret "router-metrics-certs-default" not found Apr 16 17:43:28.015166 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.015144 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-stats-auth\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:28.015302 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.015249 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-default-certificate\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:28.020776 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.020757 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2bpk\" (UniqueName: \"kubernetes.io/projected/9fc88243-e880-437b-8e63-4cc27b6580f3-kube-api-access-h2bpk\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:28.117177 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.117146 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-5785d4fcdd-7pxpm"] Apr 16 17:43:28.121202 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:43:28.121173 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25d7e7ca_5b2b_4a80_8a62_70ab5d3ba611.slice/crio-f0633cb40df5345d918224a67bb600e8d975ac0f481606b8f0b3cf1ed8aadf56 WatchSource:0}: Error finding container f0633cb40df5345d918224a67bb600e8d975ac0f481606b8f0b3cf1ed8aadf56: Status 404 returned error can't find the container 
Apr 16 17:43:28.121202 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:43:28.121173 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25d7e7ca_5b2b_4a80_8a62_70ab5d3ba611.slice/crio-f0633cb40df5345d918224a67bb600e8d975ac0f481606b8f0b3cf1ed8aadf56 WatchSource:0}: Error finding container f0633cb40df5345d918224a67bb600e8d975ac0f481606b8f0b3cf1ed8aadf56: Status 404 returned error can't find the container with id f0633cb40df5345d918224a67bb600e8d975ac0f481606b8f0b3cf1ed8aadf56
Apr 16 17:43:28.403337 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.403313 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" event={"ID":"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611","Type":"ContainerStarted","Data":"f0633cb40df5345d918224a67bb600e8d975ac0f481606b8f0b3cf1ed8aadf56"}
Apr 16 17:43:28.414635 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.414613 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"
Apr 16 17:43:28.414747 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.414735 2571 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Apr 16 17:43:28.414796 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.414787 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls podName:dbefec75-4a25-443d-8f9e-1fc4a14fce37 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:29.414773031 +0000 UTC m=+130.888002482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-cr6kk" (UID: "dbefec75-4a25-443d-8f9e-1fc4a14fce37") : secret "cluster-monitoring-operator-tls" not found
Apr 16 17:43:28.515690 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.515662 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:28.515781 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.515772 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:28.515825 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.515812 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found
Apr 16 17:43:28.515901 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.515890 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:29.515871157 +0000 UTC m=+130.989100623 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : secret "router-metrics-certs-default" not found
Apr 16 17:43:28.515948 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.515911 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:29.51590139 +0000 UTC m=+130.989130841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : configmap references non-existent config key: service-ca.crt
Apr 16 17:43:28.717232 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:28.717152 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5"
Apr 16 17:43:28.717606 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.717273 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Apr 16 17:43:28.717606 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:28.717329 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs podName:5b6b22f4-0d78-4198-b821-0f4f52115d9c nodeName:}" failed. No retries permitted until 2026-04-16 17:45:30.717315371 +0000 UTC m=+252.190544822 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs") pod "network-metrics-daemon-x4kh5" (UID: "5b6b22f4-0d78-4198-b821-0f4f52115d9c") : secret "metrics-daemon-secret" not found
Apr 16 17:43:29.420943 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:29.420906 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"
Apr 16 17:43:29.421105 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:29.421066 2571 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Apr 16 17:43:29.421164 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:29.421146 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls podName:dbefec75-4a25-443d-8f9e-1fc4a14fce37 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:31.421124008 +0000 UTC m=+132.894353462 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-cr6kk" (UID: "dbefec75-4a25-443d-8f9e-1fc4a14fce37") : secret "cluster-monitoring-operator-tls" not found
Apr 16 17:43:29.521678 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:29.521649 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:29.521792 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:29.521765 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:29.521831 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:29.521795 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found
Apr 16 17:43:29.521886 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:29.521876 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:31.521837814 +0000 UTC m=+132.995067279 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : secret "router-metrics-certs-default" not found
Apr 16 17:43:29.521928 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:29.521907 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:31.521896978 +0000 UTC m=+132.995126433 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : configmap references non-existent config key: service-ca.crt
Apr 16 17:43:30.407976 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:30.407939 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" event={"ID":"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611","Type":"ContainerStarted","Data":"80cce3d98c466c41fc7f1a7106f9f53137c1c1762442616bf19d07d461663c00"}
Apr 16 17:43:30.423984 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:30.423946 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" podStartSLOduration=1.431150369 podStartE2EDuration="3.423932192s" podCreationTimestamp="2026-04-16 17:43:27 +0000 UTC" firstStartedPulling="2026-04-16 17:43:28.122836395 +0000 UTC m=+129.596065846" lastFinishedPulling="2026-04-16 17:43:30.115618219 +0000 UTC m=+131.588847669" observedRunningTime="2026-04-16 17:43:30.423404224 +0000 UTC m=+131.896633688" watchObservedRunningTime="2026-04-16 17:43:30.423932192 +0000 UTC m=+131.897161683"
Apr 16 17:43:31.437150 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:31.437117 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"
Apr 16 17:43:31.437514 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:31.437246 2571 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Apr 16 17:43:31.437514 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:31.437305 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls podName:dbefec75-4a25-443d-8f9e-1fc4a14fce37 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:35.43728968 +0000 UTC m=+136.910519131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-cr6kk" (UID: "dbefec75-4a25-443d-8f9e-1fc4a14fce37") : secret "cluster-monitoring-operator-tls" not found
Apr 16 17:43:31.537954 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:31.537924 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:31.538097 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:31.538054 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found
Apr 16 17:43:31.538097 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:31.538085 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:31.538199 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:31.538114 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:35.538095235 +0000 UTC m=+137.011324686 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : secret "router-metrics-certs-default" not found
Apr 16 17:43:31.538260 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:31.538223 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:35.538206539 +0000 UTC m=+137.011435996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : configmap references non-existent config key: service-ca.crt
Apr 16 17:43:32.758169 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:32.758140 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-cgldj_3d328985-f90b-469d-a8a4-9962d8311ef2/dns-node-resolver/0.log"
Apr 16 17:43:33.758171 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:33.758148 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-hdwwl_dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01/node-ca/0.log"
Apr 16 17:43:35.465111 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:35.465080 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"
Apr 16 17:43:35.465480 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:35.465189 2571 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Apr 16 17:43:35.465480 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:35.465247 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls podName:dbefec75-4a25-443d-8f9e-1fc4a14fce37 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:43.4652338 +0000 UTC m=+144.938463251 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-cr6kk" (UID: "dbefec75-4a25-443d-8f9e-1fc4a14fce37") : secret "cluster-monitoring-operator-tls" not found
Apr 16 17:43:35.565924 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:35.565888 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:35.566066 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:35.565938 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:35.566066 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:35.566045 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found
Apr 16 17:43:35.566139 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:35.566070 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:43.566052674 +0000 UTC m=+145.039282125 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : configmap references non-existent config key: service-ca.crt
Apr 16 17:43:35.566139 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:35.566093 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:43.566086588 +0000 UTC m=+145.039316039 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : secret "router-metrics-certs-default" not found
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.721185 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.721158 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7"] Apr 16 17:43:37.722129 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.722111 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Apr 16 17:43:37.722309 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.722292 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Apr 16 17:43:37.722360 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.722351 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Apr 16 17:43:37.722746 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.722729 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.722820 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.722736 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Apr 16 17:43:37.722820 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.722790 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-zmgrv\"" Apr 16 17:43:37.726787 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.726762 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-nm5lz\"" Apr 16 17:43:37.726935 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.726774 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Apr 16 17:43:37.726935 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.726825 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm"] Apr 16 17:43:37.726935 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.726841 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Apr 16 17:43:37.726935 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.726779 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Apr 16 17:43:37.726935 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.726784 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Apr 16 17:43:37.728531 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.728509 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:37.730376 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.730354 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Apr 16 17:43:37.730482 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.730379 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Apr 16 17:43:37.730482 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.730413 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-swcbj\"" Apr 16 17:43:37.730563 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.730480 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Apr 16 17:43:37.733927 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.733907 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn"] Apr 16 17:43:37.734885 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.734851 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7"] Apr 16 17:43:37.740417 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.740398 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm"] Apr 16 17:43:37.883443 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.883422 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92chw\" (UniqueName: \"kubernetes.io/projected/27efa923-840a-4df7-8dcb-d30a622b5c3f-kube-api-access-92chw\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.883550 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.883464 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:37.883550 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.883490 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13deaf04-20aa-41b3-8c14-653473a8ddc7-serving-cert\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.883624 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.883548 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-kube-api-access-qxx7s\") pod \"cluster-samples-operator-667775844f-kqxzm\" 
(UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:37.883624 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.883577 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13deaf04-20aa-41b3-8c14-653473a8ddc7-config\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.883624 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.883596 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqnj\" (UniqueName: \"kubernetes.io/projected/13deaf04-20aa-41b3-8c14-653473a8ddc7-kube-api-access-6kqnj\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.883713 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.883625 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27efa923-840a-4df7-8dcb-d30a622b5c3f-config\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.883713 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.883664 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27efa923-840a-4df7-8dcb-d30a622b5c3f-serving-cert\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.984401 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.984345 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:37.984401 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.984378 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13deaf04-20aa-41b3-8c14-653473a8ddc7-serving-cert\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.984524 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.984403 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-kube-api-access-qxx7s\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:37.984524 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.984421 2571 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13deaf04-20aa-41b3-8c14-653473a8ddc7-config\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.984524 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.984438 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqnj\" (UniqueName: \"kubernetes.io/projected/13deaf04-20aa-41b3-8c14-653473a8ddc7-kube-api-access-6kqnj\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.984524 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.984472 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27efa923-840a-4df7-8dcb-d30a622b5c3f-config\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.984524 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.984492 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27efa923-840a-4df7-8dcb-d30a622b5c3f-serving-cert\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.984750 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.984562 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-92chw\" (UniqueName: \"kubernetes.io/projected/27efa923-840a-4df7-8dcb-d30a622b5c3f-kube-api-access-92chw\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.984750 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:37.984588 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 17:43:37.984750 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:37.984650 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls podName:0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:38.484630976 +0000 UTC m=+139.957860441 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls") pod "cluster-samples-operator-667775844f-kqxzm" (UID: "0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96") : secret "samples-operator-tls" not found Apr 16 17:43:37.985054 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.985031 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13deaf04-20aa-41b3-8c14-653473a8ddc7-config\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.985125 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.985108 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27efa923-840a-4df7-8dcb-d30a622b5c3f-config\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.986970 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.986949 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13deaf04-20aa-41b3-8c14-653473a8ddc7-serving-cert\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:37.987031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.986956 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27efa923-840a-4df7-8dcb-d30a622b5c3f-serving-cert\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.994895 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.994849 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-kube-api-access-qxx7s\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:37.994983 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.994950 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-92chw\" (UniqueName: \"kubernetes.io/projected/27efa923-840a-4df7-8dcb-d30a622b5c3f-kube-api-access-92chw\") pod \"kube-storage-version-migrator-operator-756bb7d76f-gzcr7\" (UID: \"27efa923-840a-4df7-8dcb-d30a622b5c3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" Apr 16 17:43:37.995023 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:37.995009 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqnj\" (UniqueName: \"kubernetes.io/projected/13deaf04-20aa-41b3-8c14-653473a8ddc7-kube-api-access-6kqnj\") pod \"service-ca-operator-69965bb79d-dtpsn\" (UID: \"13deaf04-20aa-41b3-8c14-653473a8ddc7\") " pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" Apr 16 17:43:38.030279 
Apr 16 17:43:38.030279 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:38.030251 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn"
Apr 16 17:43:38.036865 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:38.036834 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7"
Apr 16 17:43:38.154046 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:38.154017 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn"]
Apr 16 17:43:38.157465 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:43:38.157440 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13deaf04_20aa_41b3_8c14_653473a8ddc7.slice/crio-ee2cb81176f4af9c0bd90986538a94a78549324e234c924f94b5178dbb237ff9 WatchSource:0}: Error finding container ee2cb81176f4af9c0bd90986538a94a78549324e234c924f94b5178dbb237ff9: Status 404 returned error can't find the container with id ee2cb81176f4af9c0bd90986538a94a78549324e234c924f94b5178dbb237ff9
Apr 16 17:43:38.172810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:38.172786 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7"]
Apr 16 17:43:38.175359 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:43:38.175338 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27efa923_840a_4df7_8dcb_d30a622b5c3f.slice/crio-5a5e5a5e714d035617b1e963ee1c92bed3ca65702c412b3aa44c2d0d7931a765 WatchSource:0}: Error finding container 5a5e5a5e714d035617b1e963ee1c92bed3ca65702c412b3aa44c2d0d7931a765: Status 404 returned error can't find the container with id 5a5e5a5e714d035617b1e963ee1c92bed3ca65702c412b3aa44c2d0d7931a765
Apr 16 17:43:38.421994 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:38.421962 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" event={"ID":"27efa923-840a-4df7-8dcb-d30a622b5c3f","Type":"ContainerStarted","Data":"5a5e5a5e714d035617b1e963ee1c92bed3ca65702c412b3aa44c2d0d7931a765"}
Apr 16 17:43:38.422871 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:38.422829 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" event={"ID":"13deaf04-20aa-41b3-8c14-653473a8ddc7","Type":"ContainerStarted","Data":"ee2cb81176f4af9c0bd90986538a94a78549324e234c924f94b5178dbb237ff9"}
Apr 16 17:43:38.489901 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:38.489875 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm"
Apr 16 17:43:38.490066 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:38.490046 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 16 17:43:38.490136 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:38.490125 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls podName:0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:39.490103257 +0000 UTC m=+140.963332723 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls") pod "cluster-samples-operator-667775844f-kqxzm" (UID: "0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96") : secret "samples-operator-tls" not found
Apr 16 17:43:39.498129 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:39.498094 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm"
Apr 16 17:43:39.498451 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:39.498204 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 16 17:43:39.498451 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:39.498253 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls podName:0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:41.498239005 +0000 UTC m=+142.971468456 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls") pod "cluster-samples-operator-667775844f-kqxzm" (UID: "0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96") : secret "samples-operator-tls" not found
Apr 16 17:43:41.429264 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:41.429228 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" event={"ID":"27efa923-840a-4df7-8dcb-d30a622b5c3f","Type":"ContainerStarted","Data":"c56bc181f645382c07709f24d507c7eb9847b9d9d2f222171fe17f5a100aded8"}
Apr 16 17:43:41.430596 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:41.430571 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" event={"ID":"13deaf04-20aa-41b3-8c14-653473a8ddc7","Type":"ContainerStarted","Data":"d632addbcce4984b2520acc96355580aa0f9109908393e996864da1a0ef95701"}
Apr 16 17:43:41.445944 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:41.445909 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" podStartSLOduration=2.114591911 podStartE2EDuration="4.445898927s" podCreationTimestamp="2026-04-16 17:43:37 +0000 UTC" firstStartedPulling="2026-04-16 17:43:38.177073305 +0000 UTC m=+139.650302759" lastFinishedPulling="2026-04-16 17:43:40.508380319 +0000 UTC m=+141.981609775" observedRunningTime="2026-04-16 17:43:41.445831877 +0000 UTC m=+142.919061351" watchObservedRunningTime="2026-04-16 17:43:41.445898927 +0000 UTC m=+142.919128400"
Apr 16 17:43:41.463953 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:41.463909 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" podStartSLOduration=2.115600144 podStartE2EDuration="4.463898352s" podCreationTimestamp="2026-04-16 17:43:37 +0000 UTC" firstStartedPulling="2026-04-16 17:43:38.159202387 +0000 UTC m=+139.632431841" lastFinishedPulling="2026-04-16 17:43:40.507500588 +0000 UTC m=+141.980730049" observedRunningTime="2026-04-16 17:43:41.46235175 +0000 UTC m=+142.935581234" watchObservedRunningTime="2026-04-16 17:43:41.463898352 +0000 UTC m=+142.937127825"
Apr 16 17:43:41.515630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:41.515609 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm"
Apr 16 17:43:41.515942 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:41.515917 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 16 17:43:41.516013 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:41.515978 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls podName:0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:45.515960741 +0000 UTC m=+146.989190206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls") pod "cluster-samples-operator-667775844f-kqxzm" (UID: "0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96") : secret "samples-operator-tls" not found
Apr 16 17:43:43.531981 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:43.531941 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"
Apr 16 17:43:43.532356 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:43.532059 2571 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
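
The two "Observed pod startup duration" entries above fit a simple relationship: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For the migrator operator pod, 4.445898927s minus 2.331307016s of pulling (computed from the monotonic m=+ offsets) gives the logged 2.114591911. A small Go check with the timestamps copied from the entry; the formula is inferred from these numbers rather than quoted from kubelet source:

package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the kube-storage-version-migrator-operator
// entry above. The relationship (SLO = E2E minus the image-pull
// window) is inferred from the logged numbers, not from kubelet code.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	mustParse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := mustParse("2026-04-16 17:43:37 +0000 UTC")
	running := mustParse("2026-04-16 17:43:41.445898927 +0000 UTC")
	pullStart := mustParse("2026-04-16 17:43:38.177073305 +0000 UTC")
	pullEnd := mustParse("2026-04-16 17:43:40.508380319 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration =", e2e) // 4.445898927s
	fmt.Println("podStartSLOduration =", slo) // ~2.11459191s (log: 2.114591911, via monotonic offsets)
}
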
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-cr6kk" (UID: "dbefec75-4a25-443d-8f9e-1fc4a14fce37") : secret "cluster-monitoring-operator-tls" not found Apr 16 17:43:43.633103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:43.633068 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:43.633252 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:43.633124 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:43:43.633252 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:43.633206 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 16 17:43:43.633252 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:43.633251 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:59.633238409 +0000 UTC m=+161.106467860 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : secret "router-metrics-certs-default" not found Apr 16 17:43:43.633380 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:43.633267 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle podName:9fc88243-e880-437b-8e63-4cc27b6580f3 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:59.633260383 +0000 UTC m=+161.106489833 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle") pod "router-default-6c44cbbfb4-k2krf" (UID: "9fc88243-e880-437b-8e63-4cc27b6580f3") : configmap references non-existent config key: service-ca.crt Apr 16 17:43:44.417704 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.417671 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-xkqb8"] Apr 16 17:43:44.419743 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.419722 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.421754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.421727 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 16 17:43:44.421893 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.421755 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 16 17:43:44.421893 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.421805 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-48zkm\"" Apr 16 17:43:44.433594 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.433572 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-xkqb8"] Apr 16 17:43:44.539874 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.539835 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/737fd733-20da-4813-9abe-61f8e3e58fa2-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.540136 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.539876 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52dj6\" (UniqueName: \"kubernetes.io/projected/737fd733-20da-4813-9abe-61f8e3e58fa2-kube-api-access-52dj6\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.540136 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.539993 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.540136 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.540035 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/737fd733-20da-4813-9abe-61f8e3e58fa2-data-volume\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.540136 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.540056 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/737fd733-20da-4813-9abe-61f8e3e58fa2-crio-socket\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.640702 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.640673 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-xkqb8\" (UID: 
\"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.640801 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.640711 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/737fd733-20da-4813-9abe-61f8e3e58fa2-data-volume\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.640801 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.640732 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/737fd733-20da-4813-9abe-61f8e3e58fa2-crio-socket\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.640933 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.640818 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/737fd733-20da-4813-9abe-61f8e3e58fa2-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.640933 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.640849 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52dj6\" (UniqueName: \"kubernetes.io/projected/737fd733-20da-4813-9abe-61f8e3e58fa2-kube-api-access-52dj6\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.640933 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.640889 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/737fd733-20da-4813-9abe-61f8e3e58fa2-crio-socket\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.640933 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:44.640821 2571 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 17:43:44.641114 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:44.640962 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls podName:737fd733-20da-4813-9abe-61f8e3e58fa2 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:45.140945772 +0000 UTC m=+146.614175223 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-xkqb8" (UID: "737fd733-20da-4813-9abe-61f8e3e58fa2") : secret "insights-runtime-extractor-tls" not found Apr 16 17:43:44.641114 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.641081 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/737fd733-20da-4813-9abe-61f8e3e58fa2-data-volume\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.641399 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.641378 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/737fd733-20da-4813-9abe-61f8e3e58fa2-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:44.650888 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:44.650848 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52dj6\" (UniqueName: \"kubernetes.io/projected/737fd733-20da-4813-9abe-61f8e3e58fa2-kube-api-access-52dj6\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:45.145821 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:45.145798 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:45.145957 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:45.145927 2571 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 17:43:45.146002 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:45.145976 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls podName:737fd733-20da-4813-9abe-61f8e3e58fa2 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:46.145962631 +0000 UTC m=+147.619192082 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-xkqb8" (UID: "737fd733-20da-4813-9abe-61f8e3e58fa2") : secret "insights-runtime-extractor-tls" not found Apr 16 17:43:45.549579 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:45.549504 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:45.549919 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:45.549609 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 17:43:45.549919 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:45.549658 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls podName:0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:53.549644843 +0000 UTC m=+155.022874294 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls") pod "cluster-samples-operator-667775844f-kqxzm" (UID: "0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96") : secret "samples-operator-tls" not found Apr 16 17:43:46.154340 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:46.154303 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:46.154513 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:46.154469 2571 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 17:43:46.154563 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:46.154550 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls podName:737fd733-20da-4813-9abe-61f8e3e58fa2 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:48.154530899 +0000 UTC m=+149.627760357 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-xkqb8" (UID: "737fd733-20da-4813-9abe-61f8e3e58fa2") : secret "insights-runtime-extractor-tls" not found Apr 16 17:43:48.170875 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:48.170832 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:48.171311 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:48.170999 2571 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 17:43:48.171311 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:48.171071 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls podName:737fd733-20da-4813-9abe-61f8e3e58fa2 nodeName:}" failed. No retries permitted until 2026-04-16 17:43:52.171053711 +0000 UTC m=+153.644283161 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-xkqb8" (UID: "737fd733-20da-4813-9abe-61f8e3e58fa2") : secret "insights-runtime-extractor-tls" not found Apr 16 17:43:52.199383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:52.199339 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8" Apr 16 17:43:52.199780 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:52.199481 2571 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 17:43:52.199780 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:52.199549 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls podName:737fd733-20da-4813-9abe-61f8e3e58fa2 nodeName:}" failed. No retries permitted until 2026-04-16 17:44:00.19953034 +0000 UTC m=+161.672759809 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-xkqb8" (UID: "737fd733-20da-4813-9abe-61f8e3e58fa2") : secret "insights-runtime-extractor-tls" not found Apr 16 17:43:53.610492 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:53.610457 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:53.612893 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:53.612851 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-kqxzm\" (UID: \"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:53.642724 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:53.642701 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" Apr 16 17:43:53.759822 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:53.759701 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm"] Apr 16 17:43:54.460481 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:54.460442 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" event={"ID":"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96","Type":"ContainerStarted","Data":"58bfac5c3017a7c4fa5e738ed139e6a74e616b72be348a65f8f07ba545343138"} Apr 16 17:43:55.467437 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:55.467402 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" event={"ID":"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96","Type":"ContainerStarted","Data":"4c8f89d8409f192ac2734686f997b799df939d0e4c7f9d4232d8667f1879d6bf"} Apr 16 17:43:55.468416 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:55.468386 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" event={"ID":"0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96","Type":"ContainerStarted","Data":"bae0133cf30397b97ec625750c82caaf4cc5fe988f9c8dc1a14ddf4b2e7ffee9"} Apr 16 17:43:55.485981 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:55.485926 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-kqxzm" podStartSLOduration=16.952147842 podStartE2EDuration="18.485914364s" podCreationTimestamp="2026-04-16 17:43:37 +0000 UTC" firstStartedPulling="2026-04-16 17:43:53.800193606 +0000 UTC m=+155.273423070" lastFinishedPulling="2026-04-16 17:43:55.333960139 +0000 UTC m=+156.807189592" observedRunningTime="2026-04-16 17:43:55.485277718 +0000 UTC m=+156.958507192" watchObservedRunningTime="2026-04-16 17:43:55.485914364 +0000 UTC m=+156.959143836" Apr 16 17:43:55.947209 ip-10-0-138-134 
Apr 16 17:43:55.947209 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:55.947135 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" podUID="bd43cd3e-442d-4c1a-9eab-890d57ec3a13"
Apr 16 17:43:55.961287 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:55.961257 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-5pktg" podUID="971c52dd-85d8-47b8-b0b5-7369b7459c82"
Apr 16 17:43:55.973418 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:55.973397 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-2l84s" podUID="b6580a74-c19a-4cd2-b8e2-e8a8423dc761"
Apr 16 17:43:56.470415 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:56.470389 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5pktg"
Apr 16 17:43:56.470796 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:56.470389 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk"
Apr 16 17:43:57.050320 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:43:57.050277 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-x4kh5" podUID="5b6b22f4-0d78-4198-b821-0f4f52115d9c"
Apr 16 17:43:59.557996 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.557828 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"
Apr 16 17:43:59.562441 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.562415 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbefec75-4a25-443d-8f9e-1fc4a14fce37-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-cr6kk\" (UID: \"dbefec75-4a25-443d-8f9e-1fc4a14fce37\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"
Apr 16 17:43:59.659435 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.659407 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:59.659539 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.659503 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:59.660085 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.660070 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc88243-e880-437b-8e63-4cc27b6580f3-service-ca-bundle\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:59.661690 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.661672 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc88243-e880-437b-8e63-4cc27b6580f3-metrics-certs\") pod \"router-default-6c44cbbfb4-k2krf\" (UID: \"9fc88243-e880-437b-8e63-4cc27b6580f3\") " pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:59.796766 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.796747 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"
Apr 16 17:43:59.901335 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.901307 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf"
Apr 16 17:43:59.908115 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:43:59.908078 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk"]
Apr 16 17:43:59.912203 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:43:59.912176 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbefec75_4a25_443d_8f9e_1fc4a14fce37.slice/crio-931fa3a4e8c444be8e5f8f9160d3ae5a8a6d07159ca9f080af52250ff272d3cd WatchSource:0}: Error finding container 931fa3a4e8c444be8e5f8f9160d3ae5a8a6d07159ca9f080af52250ff272d3cd: Status 404 returned error can't find the container with id 931fa3a4e8c444be8e5f8f9160d3ae5a8a6d07159ca9f080af52250ff272d3cd
Apr 16 17:44:00.021507 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.021485 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-6c44cbbfb4-k2krf"]
Apr 16 17:44:00.023713 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:00.023689 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fc88243_e880_437b_8e63_4cc27b6580f3.slice/crio-2f087c6fac925f4160b4346b67abd25a4d9279fbab371786b25c88b39f139e7c WatchSource:0}: Error finding container 2f087c6fac925f4160b4346b67abd25a4d9279fbab371786b25c88b39f139e7c: Status 404 returned error can't find the container with id 2f087c6fac925f4160b4346b67abd25a4d9279fbab371786b25c88b39f139e7c
Apr 16 17:44:00.264755 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.264721 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8"
Apr 16 17:44:00.266849 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.266822 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/737fd733-20da-4813-9abe-61f8e3e58fa2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-xkqb8\" (UID: \"737fd733-20da-4813-9abe-61f8e3e58fa2\") " pod="openshift-insights/insights-runtime-extractor-xkqb8"
Apr 16 17:44:00.329962 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.329941 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-xkqb8"
Apr 16 17:44:00.444324 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.444294 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-xkqb8"]
Apr 16 17:44:00.447713 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:00.447683 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod737fd733_20da_4813_9abe_61f8e3e58fa2.slice/crio-e874f3fdff49ecfd68e3cd77ddbb2e9440d8f0395d0b1127b5d0875b98e039f6 WatchSource:0}: Error finding container e874f3fdff49ecfd68e3cd77ddbb2e9440d8f0395d0b1127b5d0875b98e039f6: Status 404 returned error can't find the container with id e874f3fdff49ecfd68e3cd77ddbb2e9440d8f0395d0b1127b5d0875b98e039f6
Apr 16 17:44:00.481714 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.481681 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" event={"ID":"dbefec75-4a25-443d-8f9e-1fc4a14fce37","Type":"ContainerStarted","Data":"931fa3a4e8c444be8e5f8f9160d3ae5a8a6d07159ca9f080af52250ff272d3cd"}
Apr 16 17:44:00.482801 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.482775 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-xkqb8" event={"ID":"737fd733-20da-4813-9abe-61f8e3e58fa2","Type":"ContainerStarted","Data":"e874f3fdff49ecfd68e3cd77ddbb2e9440d8f0395d0b1127b5d0875b98e039f6"}
Apr 16 17:44:00.483968 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.483947 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" event={"ID":"9fc88243-e880-437b-8e63-4cc27b6580f3","Type":"ContainerStarted","Data":"fd38be282b64f7a1fc455ca34badb858c72ba49bd0b1e89963a69a0e9d16d135"}
Apr 16 17:44:00.484053 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.484020 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" event={"ID":"9fc88243-e880-437b-8e63-4cc27b6580f3","Type":"ContainerStarted","Data":"2f087c6fac925f4160b4346b67abd25a4d9279fbab371786b25c88b39f139e7c"}
Apr 16 17:44:00.502559 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.502478 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" podStartSLOduration=33.502462648 podStartE2EDuration="33.502462648s" podCreationTimestamp="2026-04-16 17:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:44:00.501763566 +0000 UTC m=+161.974993038" watchObservedRunningTime="2026-04-16 17:44:00.502462648 +0000 UTC m=+161.975692129"
pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:44:00.870068 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.870043 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:44:00.870550 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.870076 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:44:00.872810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.872764 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6580a74-c19a-4cd2-b8e2-e8a8423dc761-cert\") pod \"ingress-canary-2l84s\" (UID: \"b6580a74-c19a-4cd2-b8e2-e8a8423dc761\") " pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:44:00.872954 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.872898 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/971c52dd-85d8-47b8-b0b5-7369b7459c82-metrics-tls\") pod \"dns-default-5pktg\" (UID: \"971c52dd-85d8-47b8-b0b5-7369b7459c82\") " pod="openshift-dns/dns-default-5pktg" Apr 16 17:44:00.873019 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.872979 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"image-registry-65c7c55b77-4pqmk\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:44:00.902451 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.902423 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:44:00.905146 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.905125 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:44:00.972939 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.972914 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-vf8ds\"" Apr 16 17:44:00.973065 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.972964 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-2vlxb\"" Apr 16 17:44:00.981617 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.981593 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5pktg" Apr 16 17:44:00.981729 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:00.981713 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:44:01.490193 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:01.490152 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-xkqb8" event={"ID":"737fd733-20da-4813-9abe-61f8e3e58fa2","Type":"ContainerStarted","Data":"836c478ca0db4d7539a9a0aa5bcbd1e2e0efde15fb5d2cd44b99708166c234aa"} Apr 16 17:44:01.490372 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:01.490355 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:44:01.491671 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:01.491643 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-6c44cbbfb4-k2krf" Apr 16 17:44:01.566050 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:01.566004 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-65c7c55b77-4pqmk"] Apr 16 17:44:01.569984 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:01.569960 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5pktg"] Apr 16 17:44:01.573671 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:01.573634 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971c52dd_85d8_47b8_b0b5_7369b7459c82.slice/crio-3e56c35ea4658444eb5b2db3801a1c3d1f82eabe9e3776c60aa35d6465099b53 WatchSource:0}: Error finding container 3e56c35ea4658444eb5b2db3801a1c3d1f82eabe9e3776c60aa35d6465099b53: Status 404 returned error can't find the container with id 3e56c35ea4658444eb5b2db3801a1c3d1f82eabe9e3776c60aa35d6465099b53 Apr 16 17:44:02.495284 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:02.495248 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" event={"ID":"dbefec75-4a25-443d-8f9e-1fc4a14fce37","Type":"ContainerStarted","Data":"f34a073dc585d1c706c98e5bcaa12a83988ca976b706d249272cf92d8ab5aa42"} Apr 16 17:44:02.496917 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:02.496837 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" event={"ID":"bd43cd3e-442d-4c1a-9eab-890d57ec3a13","Type":"ContainerStarted","Data":"4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301"} Apr 16 17:44:02.496917 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:02.496892 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" event={"ID":"bd43cd3e-442d-4c1a-9eab-890d57ec3a13","Type":"ContainerStarted","Data":"26e07b8cfa3f4a3b6048b4a77a51a83ca2214c7c0ab2f79f7484c9d2f7806649"} Apr 16 17:44:02.497107 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:02.497067 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:44:02.498741 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:02.498708 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-xkqb8" event={"ID":"737fd733-20da-4813-9abe-61f8e3e58fa2","Type":"ContainerStarted","Data":"088379bde21c88b7e1e761108025d11ee596bceb7288acdfb872ca06e223c432"} Apr 16 17:44:02.500072 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:02.500053 2571 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-dns/dns-default-5pktg" event={"ID":"971c52dd-85d8-47b8-b0b5-7369b7459c82","Type":"ContainerStarted","Data":"3e56c35ea4658444eb5b2db3801a1c3d1f82eabe9e3776c60aa35d6465099b53"} Apr 16 17:44:02.517263 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:02.516955 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-cr6kk" podStartSLOduration=33.956665162 podStartE2EDuration="35.516942067s" podCreationTimestamp="2026-04-16 17:43:27 +0000 UTC" firstStartedPulling="2026-04-16 17:43:59.913928342 +0000 UTC m=+161.387157797" lastFinishedPulling="2026-04-16 17:44:01.474205228 +0000 UTC m=+162.947434702" observedRunningTime="2026-04-16 17:44:02.515434631 +0000 UTC m=+163.988664104" watchObservedRunningTime="2026-04-16 17:44:02.516942067 +0000 UTC m=+163.990171543" Apr 16 17:44:02.540453 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:02.540171 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" podStartSLOduration=154.540155485 podStartE2EDuration="2m34.540155485s" podCreationTimestamp="2026-04-16 17:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:44:02.539052793 +0000 UTC m=+164.012282267" watchObservedRunningTime="2026-04-16 17:44:02.540155485 +0000 UTC m=+164.013384961" Apr 16 17:44:03.506059 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:03.506025 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-xkqb8" event={"ID":"737fd733-20da-4813-9abe-61f8e3e58fa2","Type":"ContainerStarted","Data":"443802f0a59015e21127c82c82073624c790c48744001aac2edcb3bade49d57c"} Apr 16 17:44:03.529318 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:03.529273 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-xkqb8" podStartSLOduration=16.672644275 podStartE2EDuration="19.529259275s" podCreationTimestamp="2026-04-16 17:43:44 +0000 UTC" firstStartedPulling="2026-04-16 17:44:00.550177546 +0000 UTC m=+162.023407011" lastFinishedPulling="2026-04-16 17:44:03.406792556 +0000 UTC m=+164.880022011" observedRunningTime="2026-04-16 17:44:03.52830449 +0000 UTC m=+165.001534040" watchObservedRunningTime="2026-04-16 17:44:03.529259275 +0000 UTC m=+165.002488748" Apr 16 17:44:04.511120 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:04.511079 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5pktg" event={"ID":"971c52dd-85d8-47b8-b0b5-7369b7459c82","Type":"ContainerStarted","Data":"1fb272241eb7af82d34469ed776db1df819d6a374df1a560503339d656805a10"} Apr 16 17:44:04.511120 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:04.511123 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5pktg" event={"ID":"971c52dd-85d8-47b8-b0b5-7369b7459c82","Type":"ContainerStarted","Data":"f1cafd6ba7a868ebf74fe5da8c78449f885c0cdbf12d8a58ce9074d71bd7c249"} Apr 16 17:44:04.535623 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:04.535564 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5pktg" podStartSLOduration=130.701762199 podStartE2EDuration="2m12.535546626s" podCreationTimestamp="2026-04-16 17:41:52 +0000 UTC" firstStartedPulling="2026-04-16 17:44:01.575761603 +0000 UTC m=+163.048991059" 
lastFinishedPulling="2026-04-16 17:44:03.409546032 +0000 UTC m=+164.882775486" observedRunningTime="2026-04-16 17:44:04.535221968 +0000 UTC m=+166.008451442" watchObservedRunningTime="2026-04-16 17:44:04.535546626 +0000 UTC m=+166.008776100" Apr 16 17:44:05.515351 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:05.515322 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-5pktg" Apr 16 17:44:05.886149 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:05.886071 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2"] Apr 16 17:44:05.889581 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:05.889563 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" Apr 16 17:44:05.893285 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:05.893260 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-dockercfg-vqj9v\"" Apr 16 17:44:05.893397 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:05.893265 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-tls\"" Apr 16 17:44:05.900361 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:05.900340 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-65c7c55b77-4pqmk"] Apr 16 17:44:05.903421 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:05.903399 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2"] Apr 16 17:44:06.010229 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:06.010196 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c5bf826c-3f1f-4726-8780-2c528e5a6bb0-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-4hkn2\" (UID: \"c5bf826c-3f1f-4726-8780-2c528e5a6bb0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" Apr 16 17:44:06.110791 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:06.110766 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c5bf826c-3f1f-4726-8780-2c528e5a6bb0-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-4hkn2\" (UID: \"c5bf826c-3f1f-4726-8780-2c528e5a6bb0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" Apr 16 17:44:06.113026 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:06.113002 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c5bf826c-3f1f-4726-8780-2c528e5a6bb0-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-4hkn2\" (UID: \"c5bf826c-3f1f-4726-8780-2c528e5a6bb0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" Apr 16 17:44:06.199869 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:06.199839 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" Apr 16 17:44:06.315832 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:06.315774 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2"] Apr 16 17:44:06.319169 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:06.319146 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5bf826c_3f1f_4726_8780_2c528e5a6bb0.slice/crio-a87fd9bf8c75eba267140af5dcd0658248c9d52a6f5c605502e2c755cd74ffee WatchSource:0}: Error finding container a87fd9bf8c75eba267140af5dcd0658248c9d52a6f5c605502e2c755cd74ffee: Status 404 returned error can't find the container with id a87fd9bf8c75eba267140af5dcd0658248c9d52a6f5c605502e2c755cd74ffee Apr 16 17:44:06.518523 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:06.518464 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" event={"ID":"c5bf826c-3f1f-4726-8780-2c528e5a6bb0","Type":"ContainerStarted","Data":"a87fd9bf8c75eba267140af5dcd0658248c9d52a6f5c605502e2c755cd74ffee"} Apr 16 17:44:07.521767 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:07.521731 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" event={"ID":"c5bf826c-3f1f-4726-8780-2c528e5a6bb0","Type":"ContainerStarted","Data":"0ba720e98a46308b86646ea57c2b5c5d05037d3827f6dc4e9f0b5b91c91bcbdc"} Apr 16 17:44:07.522152 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:07.521927 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" Apr 16 17:44:07.527089 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:07.527060 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" Apr 16 17:44:07.536957 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:07.536924 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-4hkn2" podStartSLOduration=1.5162278420000002 podStartE2EDuration="2.5369143s" podCreationTimestamp="2026-04-16 17:44:05 +0000 UTC" firstStartedPulling="2026-04-16 17:44:06.32106542 +0000 UTC m=+167.794294873" lastFinishedPulling="2026-04-16 17:44:07.34175188 +0000 UTC m=+168.814981331" observedRunningTime="2026-04-16 17:44:07.536397521 +0000 UTC m=+169.009626998" watchObservedRunningTime="2026-04-16 17:44:07.5369143 +0000 UTC m=+169.010143773" Apr 16 17:44:08.061439 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.061408 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-sh7h8"] Apr 16 17:44:08.064600 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.064584 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.067621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.067595 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\"" Apr 16 17:44:08.067621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.067604 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 16 17:44:08.067811 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.067605 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\"" Apr 16 17:44:08.067811 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.067608 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-kzwnr\"" Apr 16 17:44:08.074108 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.074086 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-sh7h8"] Apr 16 17:44:08.125167 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.125137 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b097f1da-b001-41dc-86d1-53dd5913ed6e-metrics-client-ca\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.125167 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.125166 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jskn\" (UniqueName: \"kubernetes.io/projected/b097f1da-b001-41dc-86d1-53dd5913ed6e-kube-api-access-2jskn\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.125289 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.125206 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b097f1da-b001-41dc-86d1-53dd5913ed6e-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.125335 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.125296 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b097f1da-b001-41dc-86d1-53dd5913ed6e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.225812 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.225785 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b097f1da-b001-41dc-86d1-53dd5913ed6e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " 
pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.225914 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.225833 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b097f1da-b001-41dc-86d1-53dd5913ed6e-metrics-client-ca\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.225914 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.225866 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2jskn\" (UniqueName: \"kubernetes.io/projected/b097f1da-b001-41dc-86d1-53dd5913ed6e-kube-api-access-2jskn\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.225994 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.225976 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b097f1da-b001-41dc-86d1-53dd5913ed6e-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.226427 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.226408 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b097f1da-b001-41dc-86d1-53dd5913ed6e-metrics-client-ca\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.228837 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.228815 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b097f1da-b001-41dc-86d1-53dd5913ed6e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.228945 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.228921 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b097f1da-b001-41dc-86d1-53dd5913ed6e-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.233018 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.232995 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jskn\" (UniqueName: \"kubernetes.io/projected/b097f1da-b001-41dc-86d1-53dd5913ed6e-kube-api-access-2jskn\") pod \"prometheus-operator-78f957474d-sh7h8\" (UID: \"b097f1da-b001-41dc-86d1-53dd5913ed6e\") " pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.373713 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.373646 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" Apr 16 17:44:08.488609 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.488584 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-sh7h8"] Apr 16 17:44:08.491761 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:08.491736 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb097f1da_b001_41dc_86d1_53dd5913ed6e.slice/crio-a358004b62bd21caafcc4d6a2c4b4e0f3ef73956fc50d59bbd68a26989aceba0 WatchSource:0}: Error finding container a358004b62bd21caafcc4d6a2c4b4e0f3ef73956fc50d59bbd68a26989aceba0: Status 404 returned error can't find the container with id a358004b62bd21caafcc4d6a2c4b4e0f3ef73956fc50d59bbd68a26989aceba0 Apr 16 17:44:08.525225 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:08.525197 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" event={"ID":"b097f1da-b001-41dc-86d1-53dd5913ed6e","Type":"ContainerStarted","Data":"a358004b62bd21caafcc4d6a2c4b4e0f3ef73956fc50d59bbd68a26989aceba0"} Apr 16 17:44:09.034829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:09.034803 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:44:10.030359 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:10.030325 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:44:10.032580 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:10.032562 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-w8phs\"" Apr 16 17:44:10.041351 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:10.041327 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2l84s" Apr 16 17:44:10.153407 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:10.153384 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2l84s"] Apr 16 17:44:10.156579 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:10.156555 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6580a74_c19a_4cd2_b8e2_e8a8423dc761.slice/crio-161f26b03cf72f293a6f4de8539b28a044456f70504352b11f5d59dd9857091d WatchSource:0}: Error finding container 161f26b03cf72f293a6f4de8539b28a044456f70504352b11f5d59dd9857091d: Status 404 returned error can't find the container with id 161f26b03cf72f293a6f4de8539b28a044456f70504352b11f5d59dd9857091d Apr 16 17:44:10.532744 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:10.532706 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" event={"ID":"b097f1da-b001-41dc-86d1-53dd5913ed6e","Type":"ContainerStarted","Data":"ed5853eeb6f5c2314b2912fbc4a5f8658e0c9843cd8e81c17137df947a499e7a"} Apr 16 17:44:10.532744 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:10.532747 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" event={"ID":"b097f1da-b001-41dc-86d1-53dd5913ed6e","Type":"ContainerStarted","Data":"7163483fa35c21ba0e2e68ecea9ef6ee5ba2cb446e5c3ec9fe652f268507e28a"} Apr 16 17:44:10.533754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:10.533731 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2l84s" event={"ID":"b6580a74-c19a-4cd2-b8e2-e8a8423dc761","Type":"ContainerStarted","Data":"161f26b03cf72f293a6f4de8539b28a044456f70504352b11f5d59dd9857091d"} Apr 16 17:44:10.550981 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:10.550938 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-78f957474d-sh7h8" podStartSLOduration=1.446050112 podStartE2EDuration="2.550924198s" podCreationTimestamp="2026-04-16 17:44:08 +0000 UTC" firstStartedPulling="2026-04-16 17:44:08.493562415 +0000 UTC m=+169.966791868" lastFinishedPulling="2026-04-16 17:44:09.598436496 +0000 UTC m=+171.071665954" observedRunningTime="2026-04-16 17:44:10.549970751 +0000 UTC m=+172.023200253" watchObservedRunningTime="2026-04-16 17:44:10.550924198 +0000 UTC m=+172.024153671" Apr 16 17:44:12.419630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.419599 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm"] Apr 16 17:44:12.422802 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.422787 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.424826 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.424798 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-tls\"" Apr 16 17:44:12.424952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.424836 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-dockercfg-nqkr5\"" Apr 16 17:44:12.424952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.424905 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-kube-rbac-proxy-config\"" Apr 16 17:44:12.436465 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.436439 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm"] Apr 16 17:44:12.440082 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.440064 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"] Apr 16 17:44:12.443049 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.443034 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.445578 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.445561 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-tls\"" Apr 16 17:44:12.445674 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.445609 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-custom-resource-state-configmap\"" Apr 16 17:44:12.445724 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.445703 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-dockercfg-kpnrz\"" Apr 16 17:44:12.445985 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.445969 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-kube-rbac-proxy-config\"" Apr 16 17:44:12.456038 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.456020 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"] Apr 16 17:44:12.457007 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.456990 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-q9rn9"] Apr 16 17:44:12.460399 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.460384 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.462463 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.462445 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 16 17:44:12.462667 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.462643 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 16 17:44:12.462756 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.462693 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-gp5hh\"" Apr 16 17:44:12.462756 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.462659 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 16 17:44:12.540700 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.540662 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2l84s" event={"ID":"b6580a74-c19a-4cd2-b8e2-e8a8423dc761","Type":"ContainerStarted","Data":"43d2d8d4e8b8ae7ede96f96d31049d97488c5fc14d5e70d990d7c299f8d13985"} Apr 16 17:44:12.559911 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.559884 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4q9\" (UniqueName: \"kubernetes.io/projected/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-api-access-sf4q9\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.559911 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.559912 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-tls\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.560061 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.559934 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-sys\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.560061 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.559950 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-accelerators-collector-config\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.560061 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.559986 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de996f4c-ad08-48ba-bbdf-ff07993fa471-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" 
Apr 16 17:44:12.560061 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560016 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3d1c5352-c486-4390-8ad5-dc9351d290a0-volume-directive-shadow\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"
Apr 16 17:44:12.560061 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560033 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-tls\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"
Apr 16 17:44:12.560061 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560048 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-root\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.560252 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560065 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de996f4c-ad08-48ba-bbdf-ff07993fa471-metrics-client-ca\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm"
Apr 16 17:44:12.560252 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560083 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"
Apr 16 17:44:12.560252 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560097 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxfvg\" (UniqueName: \"kubernetes.io/projected/4b6b506a-5c22-4f23-ae73-9f8da5854996-kube-api-access-sxfvg\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.560252 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560144 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"
Apr 16 17:44:12.560252 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560184 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d1c5352-c486-4390-8ad5-dc9351d290a0-metrics-client-ca\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"
Apr 16 17:44:12.560252 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560206 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4b6b506a-5c22-4f23-ae73-9f8da5854996-metrics-client-ca\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.560434 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560262 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-wtmp\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.560434 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560288 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-textfile\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.560434 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560325 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/de996f4c-ad08-48ba-bbdf-ff07993fa471-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm"
Apr 16 17:44:12.560434 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560348 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk99n\" (UniqueName: \"kubernetes.io/projected/de996f4c-ad08-48ba-bbdf-ff07993fa471-kube-api-access-vk99n\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm"
Apr 16 17:44:12.560434 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560365 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.560974 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.560930 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2l84s" podStartSLOduration=138.958925292 podStartE2EDuration="2m20.560917819s" podCreationTimestamp="2026-04-16 17:41:52 +0000 UTC" firstStartedPulling="2026-04-16 17:44:10.158515749 +0000 UTC m=+171.631745203" lastFinishedPulling="2026-04-16 17:44:11.760508266 +0000 UTC m=+173.233737730" observedRunningTime="2026-04-16 17:44:12.560759 +0000 UTC m=+174.033988472" watchObservedRunningTime="2026-04-16 17:44:12.560917819 +0000 UTC m=+174.034147292"
Apr 16 17:44:12.661098 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661066 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4q9\" (UniqueName: \"kubernetes.io/projected/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-api-access-sf4q9\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"
Apr 16 17:44:12.661098 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661098 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-tls\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.661311 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:44:12.661194 2571 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Apr 16 17:44:12.661311 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:44:12.661252 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-tls podName:4b6b506a-5c22-4f23-ae73-9f8da5854996 nodeName:}" failed. No retries permitted until 2026-04-16 17:44:13.161232574 +0000 UTC m=+174.634462042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-tls") pod "node-exporter-q9rn9" (UID: "4b6b506a-5c22-4f23-ae73-9f8da5854996") : secret "node-exporter-tls" not found
Apr 16 17:44:12.661311 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661270 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-sys\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.661311 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661297 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-accelerators-collector-config\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9"
Apr 16 17:44:12.661522 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661330 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de996f4c-ad08-48ba-bbdf-ff07993fa471-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm"
Apr 16 17:44:12.661522 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661364 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3d1c5352-c486-4390-8ad5-dc9351d290a0-volume-directive-shadow\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"
pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.661522 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661380 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-sys\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.661522 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661395 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-tls\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.661522 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661459 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-root\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.661522 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661491 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de996f4c-ad08-48ba-bbdf-ff07993fa471-metrics-client-ca\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661544 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661567 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-root\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661574 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sxfvg\" (UniqueName: \"kubernetes.io/projected/4b6b506a-5c22-4f23-ae73-9f8da5854996-kube-api-access-sxfvg\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661618 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 
17:44:12.661664 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d1c5352-c486-4390-8ad5-dc9351d290a0-metrics-client-ca\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661699 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4b6b506a-5c22-4f23-ae73-9f8da5854996-metrics-client-ca\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661724 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3d1c5352-c486-4390-8ad5-dc9351d290a0-volume-directive-shadow\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661732 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-wtmp\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.661810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661762 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-textfile\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.662274 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661826 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/de996f4c-ad08-48ba-bbdf-ff07993fa471-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.662274 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661913 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vk99n\" (UniqueName: \"kubernetes.io/projected/de996f4c-ad08-48ba-bbdf-ff07993fa471-kube-api-access-vk99n\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.662274 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661941 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.662274 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.661956 2571 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-accelerators-collector-config\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.662919 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.662525 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d1c5352-c486-4390-8ad5-dc9351d290a0-metrics-client-ca\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.662919 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.662560 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-textfile\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.662919 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.662661 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-wtmp\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.662919 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.662675 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.662919 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.662842 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4b6b506a-5c22-4f23-ae73-9f8da5854996-metrics-client-ca\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.663192 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.662964 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de996f4c-ad08-48ba-bbdf-ff07993fa471-metrics-client-ca\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.664289 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.664258 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.664395 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.664367 2571 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de996f4c-ad08-48ba-bbdf-ff07993fa471-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.664395 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.664366 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-state-metrics-tls\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.664851 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.664831 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.665077 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.665057 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/de996f4c-ad08-48ba-bbdf-ff07993fa471-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.671555 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.671499 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxfvg\" (UniqueName: \"kubernetes.io/projected/4b6b506a-5c22-4f23-ae73-9f8da5854996-kube-api-access-sxfvg\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:12.671630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.671613 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4q9\" (UniqueName: \"kubernetes.io/projected/3d1c5352-c486-4390-8ad5-dc9351d290a0-kube-api-access-sf4q9\") pod \"kube-state-metrics-7479c89684-tqm8x\" (UID: \"3d1c5352-c486-4390-8ad5-dc9351d290a0\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.672580 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.672560 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk99n\" (UniqueName: \"kubernetes.io/projected/de996f4c-ad08-48ba-bbdf-ff07993fa471-kube-api-access-vk99n\") pod \"openshift-state-metrics-5669946b84-pxgnm\" (UID: \"de996f4c-ad08-48ba-bbdf-ff07993fa471\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.731441 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.731421 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" Apr 16 17:44:12.751187 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.751166 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" Apr 16 17:44:12.882356 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.882317 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm"] Apr 16 17:44:12.910537 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:12.910514 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7479c89684-tqm8x"] Apr 16 17:44:12.915827 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:12.915799 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d1c5352_c486_4390_8ad5_dc9351d290a0.slice/crio-8f8b418c24f0c5104fd12405bbbe2933e88d4f732fe7d54b4ae50b6880a057c0 WatchSource:0}: Error finding container 8f8b418c24f0c5104fd12405bbbe2933e88d4f732fe7d54b4ae50b6880a057c0: Status 404 returned error can't find the container with id 8f8b418c24f0c5104fd12405bbbe2933e88d4f732fe7d54b4ae50b6880a057c0 Apr 16 17:44:13.167114 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:13.167082 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-tls\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:13.169484 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:13.169460 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4b6b506a-5c22-4f23-ae73-9f8da5854996-node-exporter-tls\") pod \"node-exporter-q9rn9\" (UID: \"4b6b506a-5c22-4f23-ae73-9f8da5854996\") " pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:13.369046 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:13.368973 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-q9rn9" Apr 16 17:44:13.380248 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:13.380222 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b6b506a_5c22_4f23_ae73_9f8da5854996.slice/crio-ff2d3e473638c5d690d7335cdac980a023c163a22aec3c4d64a0afb05fa930da WatchSource:0}: Error finding container ff2d3e473638c5d690d7335cdac980a023c163a22aec3c4d64a0afb05fa930da: Status 404 returned error can't find the container with id ff2d3e473638c5d690d7335cdac980a023c163a22aec3c4d64a0afb05fa930da Apr 16 17:44:13.546329 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:13.546292 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-q9rn9" event={"ID":"4b6b506a-5c22-4f23-ae73-9f8da5854996","Type":"ContainerStarted","Data":"ff2d3e473638c5d690d7335cdac980a023c163a22aec3c4d64a0afb05fa930da"} Apr 16 17:44:13.549534 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:13.549499 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" event={"ID":"3d1c5352-c486-4390-8ad5-dc9351d290a0","Type":"ContainerStarted","Data":"8f8b418c24f0c5104fd12405bbbe2933e88d4f732fe7d54b4ae50b6880a057c0"} Apr 16 17:44:13.554208 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:13.554142 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" event={"ID":"de996f4c-ad08-48ba-bbdf-ff07993fa471","Type":"ContainerStarted","Data":"d61fbc5a191986860c7f988f1e20113bc3be402538593c4e1aca53376919b494"} Apr 16 17:44:13.554208 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:13.554176 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" event={"ID":"de996f4c-ad08-48ba-bbdf-ff07993fa471","Type":"ContainerStarted","Data":"a9cb34e4fa93a039330865850f508c24748e7e327ebf0ae5eb0906bd0d8e28c8"} Apr 16 17:44:13.554208 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:13.554189 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" event={"ID":"de996f4c-ad08-48ba-bbdf-ff07993fa471","Type":"ContainerStarted","Data":"e677542ca60751df3b67183894e29ed32a8c231d37ffb7bc70e55a617c86f5da"} Apr 16 17:44:14.489687 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.489656 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6f975cbf96-5wv4r"] Apr 16 17:44:14.493122 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.493102 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.495334 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.495309 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-metrics\"" Apr 16 17:44:14.495488 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.495402 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-rules\"" Apr 16 17:44:14.495600 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.495527 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-grpc-tls-aaghogdlocivk\"" Apr 16 17:44:14.495683 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.495655 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-dockercfg-4jjnr\"" Apr 16 17:44:14.495742 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.495707 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-web\"" Apr 16 17:44:14.496162 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.496139 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy\"" Apr 16 17:44:14.496162 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.496156 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-tls\"" Apr 16 17:44:14.504778 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.504759 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f975cbf96-5wv4r"] Apr 16 17:44:14.580700 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.580674 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/925b7d14-293d-4b38-9183-53e2d8a5d716-metrics-client-ca\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.581059 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.580712 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprm8\" (UniqueName: \"kubernetes.io/projected/925b7d14-293d-4b38-9183-53e2d8a5d716-kube-api-access-lprm8\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.581059 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.580741 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.581059 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.580761 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.581059 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.580788 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.581059 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.580893 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-tls\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.581059 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.580925 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-grpc-tls\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.581059 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.580962 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.682117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.681382 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.682117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.681456 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-tls\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.682117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.681486 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-grpc-tls\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.682117 ip-10-0-138-134 
kubenswrapper[2571]: I0416 17:44:14.681519 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.682117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.681548 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/925b7d14-293d-4b38-9183-53e2d8a5d716-metrics-client-ca\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.682117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.681597 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lprm8\" (UniqueName: \"kubernetes.io/projected/925b7d14-293d-4b38-9183-53e2d8a5d716-kube-api-access-lprm8\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.682117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.681653 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.682117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.681688 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.683559 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.683503 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/925b7d14-293d-4b38-9183-53e2d8a5d716-metrics-client-ca\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.686983 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.686932 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-tls\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.687249 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.687198 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " 
pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.688337 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.687820 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-grpc-tls\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.688337 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.688286 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.690156 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.690114 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.690957 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.690900 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/925b7d14-293d-4b38-9183-53e2d8a5d716-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.709567 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.709465 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lprm8\" (UniqueName: \"kubernetes.io/projected/925b7d14-293d-4b38-9183-53e2d8a5d716-kube-api-access-lprm8\") pod \"thanos-querier-6f975cbf96-5wv4r\" (UID: \"925b7d14-293d-4b38-9183-53e2d8a5d716\") " pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.803010 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.802986 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:14.931189 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:14.931164 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f975cbf96-5wv4r"] Apr 16 17:44:14.933247 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:14.933218 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod925b7d14_293d_4b38_9183_53e2d8a5d716.slice/crio-2f5a8b896a8bd479c7dd8046177d9e9e83cef9de8905fa5bf816623bd380981b WatchSource:0}: Error finding container 2f5a8b896a8bd479c7dd8046177d9e9e83cef9de8905fa5bf816623bd380981b: Status 404 returned error can't find the container with id 2f5a8b896a8bd479c7dd8046177d9e9e83cef9de8905fa5bf816623bd380981b Apr 16 17:44:15.520589 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.520564 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5pktg" Apr 16 17:44:15.564800 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.564767 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" event={"ID":"3d1c5352-c486-4390-8ad5-dc9351d290a0","Type":"ContainerStarted","Data":"897c7dbaff9ec8e78dc95744e0ab9724e2ef69b42955f97f3796824d8a977bed"} Apr 16 17:44:15.564972 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.564808 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" event={"ID":"3d1c5352-c486-4390-8ad5-dc9351d290a0","Type":"ContainerStarted","Data":"b2168c053d098736d620d8656df53d253d4a09193fe641512fe35dbbb7e1de56"} Apr 16 17:44:15.564972 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.564824 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" event={"ID":"3d1c5352-c486-4390-8ad5-dc9351d290a0","Type":"ContainerStarted","Data":"044d5260d905912f4d70d01641802b09aaaa01930038895a556a6c0248c8f4f7"} Apr 16 17:44:15.566405 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.566380 2571 generic.go:358] "Generic (PLEG): container finished" podID="4b6b506a-5c22-4f23-ae73-9f8da5854996" containerID="14316ec2bb989904712e4e7e19b80ad2e86818525c854bc2d28c973c065042a1" exitCode=0 Apr 16 17:44:15.566538 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.566454 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-q9rn9" event={"ID":"4b6b506a-5c22-4f23-ae73-9f8da5854996","Type":"ContainerDied","Data":"14316ec2bb989904712e4e7e19b80ad2e86818525c854bc2d28c973c065042a1"} Apr 16 17:44:15.568578 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.568551 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" event={"ID":"de996f4c-ad08-48ba-bbdf-ff07993fa471","Type":"ContainerStarted","Data":"edf9893977838e82f3c0e2b8b58d8081f7b59fb146c91abc4de681ef2d09c154"} Apr 16 17:44:15.570104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.570078 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" event={"ID":"925b7d14-293d-4b38-9183-53e2d8a5d716","Type":"ContainerStarted","Data":"2f5a8b896a8bd479c7dd8046177d9e9e83cef9de8905fa5bf816623bd380981b"} Apr 16 17:44:15.584996 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.584949 2571 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-monitoring/kube-state-metrics-7479c89684-tqm8x" podStartSLOduration=1.95679479 podStartE2EDuration="3.584934238s" podCreationTimestamp="2026-04-16 17:44:12 +0000 UTC" firstStartedPulling="2026-04-16 17:44:12.917655013 +0000 UTC m=+174.390884479" lastFinishedPulling="2026-04-16 17:44:14.545794463 +0000 UTC m=+176.019023927" observedRunningTime="2026-04-16 17:44:15.582636971 +0000 UTC m=+177.055866470" watchObservedRunningTime="2026-04-16 17:44:15.584934238 +0000 UTC m=+177.058163712" Apr 16 17:44:15.600212 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:15.600167 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5669946b84-pxgnm" podStartSLOduration=2.075029378 podStartE2EDuration="3.600151995s" podCreationTimestamp="2026-04-16 17:44:12 +0000 UTC" firstStartedPulling="2026-04-16 17:44:13.020846476 +0000 UTC m=+174.494075928" lastFinishedPulling="2026-04-16 17:44:14.545969082 +0000 UTC m=+176.019198545" observedRunningTime="2026-04-16 17:44:15.59945264 +0000 UTC m=+177.072682137" watchObservedRunningTime="2026-04-16 17:44:15.600151995 +0000 UTC m=+177.073381468" Apr 16 17:44:16.576288 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.576231 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-q9rn9" event={"ID":"4b6b506a-5c22-4f23-ae73-9f8da5854996","Type":"ContainerStarted","Data":"b3f0d95c2cb8cde7d07e824a865b8a001b4243498618755c3ce47a7e58feeb88"} Apr 16 17:44:16.576288 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.576289 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-q9rn9" event={"ID":"4b6b506a-5c22-4f23-ae73-9f8da5854996","Type":"ContainerStarted","Data":"6eac5986a21d241119e7cd921910e87eb62fc56fd24f749225903e044ae68f86"} Apr 16 17:44:16.599805 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.599746 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-q9rn9" podStartSLOduration=3.43396235 podStartE2EDuration="4.599729176s" podCreationTimestamp="2026-04-16 17:44:12 +0000 UTC" firstStartedPulling="2026-04-16 17:44:13.382065778 +0000 UTC m=+174.855295237" lastFinishedPulling="2026-04-16 17:44:14.547832609 +0000 UTC m=+176.021062063" observedRunningTime="2026-04-16 17:44:16.597808323 +0000 UTC m=+178.071037795" watchObservedRunningTime="2026-04-16 17:44:16.599729176 +0000 UTC m=+178.072958650" Apr 16 17:44:16.820620 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.820597 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-76fcbb669c-ql7h9"] Apr 16 17:44:16.825136 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.825116 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:16.827195 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.827140 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-dockercfg-4b92g\"" Apr 16 17:44:16.827195 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.827157 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-server-audit-profiles\"" Apr 16 17:44:16.827333 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.827176 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-f3r3q9adg0pk1\"" Apr 16 17:44:16.827417 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.827395 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-client-certs\"" Apr 16 17:44:16.827518 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.827402 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kubelet-serving-ca-bundle\"" Apr 16 17:44:16.827518 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.827508 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-tls\"" Apr 16 17:44:16.836180 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.836093 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76fcbb669c-ql7h9"] Apr 16 17:44:16.900696 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.900669 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-client-ca-bundle\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:16.900791 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.900709 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-audit-log\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:16.900791 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.900735 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-secret-metrics-server-client-certs\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:16.900884 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.900792 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:16.900884 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.900840 2571 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkrdh\" (UniqueName: \"kubernetes.io/projected/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-kube-api-access-dkrdh\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:16.900952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.900902 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-metrics-server-audit-profiles\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:16.900952 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:16.900948 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-secret-metrics-server-tls\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.001532 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.001511 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-metrics-server-audit-profiles\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.001651 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.001557 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-secret-metrics-server-tls\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.001651 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.001581 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-client-ca-bundle\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.001651 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.001602 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-audit-log\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.001785 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.001763 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-secret-metrics-server-client-certs\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.001840 ip-10-0-138-134 kubenswrapper[2571]: I0416 
17:44:17.001814 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.001918 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.001891 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkrdh\" (UniqueName: \"kubernetes.io/projected/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-kube-api-access-dkrdh\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.001990 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.001972 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-audit-log\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.002514 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.002489 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.002514 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.002506 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-metrics-server-audit-profiles\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.004021 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.003994 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-secret-metrics-server-tls\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.004200 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.004180 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-secret-metrics-server-client-certs\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.004242 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.004189 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-client-ca-bundle\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.009154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.009132 2571 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkrdh\" (UniqueName: \"kubernetes.io/projected/c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629-kube-api-access-dkrdh\") pod \"metrics-server-76fcbb669c-ql7h9\" (UID: \"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629\") " pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.159087 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.159064 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:17.280562 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.280537 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76fcbb669c-ql7h9"] Apr 16 17:44:17.283483 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:17.283458 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69c0a1b_8ad3_4ab8_8a36_483c4c9a1629.slice/crio-c39e2fa7dd233151797755d817d42a035fba1202fd4ccfca3754131a289d97d9 WatchSource:0}: Error finding container c39e2fa7dd233151797755d817d42a035fba1202fd4ccfca3754131a289d97d9: Status 404 returned error can't find the container with id c39e2fa7dd233151797755d817d42a035fba1202fd4ccfca3754131a289d97d9 Apr 16 17:44:17.584038 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.583963 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" event={"ID":"925b7d14-293d-4b38-9183-53e2d8a5d716","Type":"ContainerStarted","Data":"726ded097368256806a25611cd22db424ab27db2b508e47292d675402a7b6ec0"} Apr 16 17:44:17.584038 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.584007 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" event={"ID":"925b7d14-293d-4b38-9183-53e2d8a5d716","Type":"ContainerStarted","Data":"c1524d4bdf42300eec71113595edd985f91107682f1833ff38eb2d24406b0641"} Apr 16 17:44:17.584038 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.584024 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" event={"ID":"925b7d14-293d-4b38-9183-53e2d8a5d716","Type":"ContainerStarted","Data":"4e1d50dc675113da140aa1b53b6425dc6738aff33a612e82e2551481bf41f86f"} Apr 16 17:44:17.585402 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:17.585361 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" event={"ID":"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629","Type":"ContainerStarted","Data":"c39e2fa7dd233151797755d817d42a035fba1202fd4ccfca3754131a289d97d9"} Apr 16 17:44:18.591984 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:18.591946 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" event={"ID":"925b7d14-293d-4b38-9183-53e2d8a5d716","Type":"ContainerStarted","Data":"21d0d2f1ff576c0c8c0040b836fc0c191572c91060a2f435baae70d8fb93e64d"} Apr 16 17:44:18.591984 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:18.591987 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" event={"ID":"925b7d14-293d-4b38-9183-53e2d8a5d716","Type":"ContainerStarted","Data":"da30a1ce3bf1a749896036d870e65075d6b659580fa0036b84beebd45af69cd4"} Apr 16 17:44:18.592803 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:18.592003 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" event={"ID":"925b7d14-293d-4b38-9183-53e2d8a5d716","Type":"ContainerStarted","Data":"c4f98c9850c69a9714a5d4d36aa52f670cece6b3cc45a0e93ba8c9831cf9f930"} Apr 16 17:44:18.592803 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:18.592215 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:18.620544 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:18.620491 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" podStartSLOduration=1.9317143570000002 podStartE2EDuration="4.620472668s" podCreationTimestamp="2026-04-16 17:44:14 +0000 UTC" firstStartedPulling="2026-04-16 17:44:14.935213816 +0000 UTC m=+176.408443283" lastFinishedPulling="2026-04-16 17:44:17.623972143 +0000 UTC m=+179.097201594" observedRunningTime="2026-04-16 17:44:18.617034499 +0000 UTC m=+180.090263987" watchObservedRunningTime="2026-04-16 17:44:18.620472668 +0000 UTC m=+180.093702142" Apr 16 17:44:19.596516 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:19.596479 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" event={"ID":"c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629","Type":"ContainerStarted","Data":"fb8204f2359c4eaaba82e7e57ad317ea1ca7b0cea1b93c1e59deec661b587b5c"} Apr 16 17:44:19.614939 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:19.614878 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" podStartSLOduration=2.290920666 podStartE2EDuration="3.614840558s" podCreationTimestamp="2026-04-16 17:44:16 +0000 UTC" firstStartedPulling="2026-04-16 17:44:17.285455831 +0000 UTC m=+178.758685282" lastFinishedPulling="2026-04-16 17:44:18.609375708 +0000 UTC m=+180.082605174" observedRunningTime="2026-04-16 17:44:19.614343234 +0000 UTC m=+181.087572749" watchObservedRunningTime="2026-04-16 17:44:19.614840558 +0000 UTC m=+181.088070032" Apr 16 17:44:22.318140 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.318102 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-586b57c7b4-r4f69"] Apr 16 17:44:22.322668 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.322648 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-586b57c7b4-r4f69" Apr 16 17:44:22.324939 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.324916 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 16 17:44:22.325031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.324916 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 16 17:44:22.325467 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.325449 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-z9h7n\"" Apr 16 17:44:22.333050 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.333032 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-586b57c7b4-r4f69"] Apr 16 17:44:22.450892 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.450865 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zb5d\" (UniqueName: \"kubernetes.io/projected/42ff6ad5-c964-401b-813a-00dfd31def98-kube-api-access-2zb5d\") pod \"downloads-586b57c7b4-r4f69\" (UID: \"42ff6ad5-c964-401b-813a-00dfd31def98\") " pod="openshift-console/downloads-586b57c7b4-r4f69" Apr 16 17:44:22.551750 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.551727 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zb5d\" (UniqueName: \"kubernetes.io/projected/42ff6ad5-c964-401b-813a-00dfd31def98-kube-api-access-2zb5d\") pod \"downloads-586b57c7b4-r4f69\" (UID: \"42ff6ad5-c964-401b-813a-00dfd31def98\") " pod="openshift-console/downloads-586b57c7b4-r4f69" Apr 16 17:44:22.559979 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.559960 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zb5d\" (UniqueName: \"kubernetes.io/projected/42ff6ad5-c964-401b-813a-00dfd31def98-kube-api-access-2zb5d\") pod \"downloads-586b57c7b4-r4f69\" (UID: \"42ff6ad5-c964-401b-813a-00dfd31def98\") " pod="openshift-console/downloads-586b57c7b4-r4f69" Apr 16 17:44:22.632735 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.632682 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-586b57c7b4-r4f69" Apr 16 17:44:22.752552 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:22.752424 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-586b57c7b4-r4f69"] Apr 16 17:44:22.755886 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:44:22.755837 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42ff6ad5_c964_401b_813a_00dfd31def98.slice/crio-9efdcdfac91e94d52ce89a2ac48e8a8505e4ca7fc543c2cc6086bcd98a71abfd WatchSource:0}: Error finding container 9efdcdfac91e94d52ce89a2ac48e8a8505e4ca7fc543c2cc6086bcd98a71abfd: Status 404 returned error can't find the container with id 9efdcdfac91e94d52ce89a2ac48e8a8505e4ca7fc543c2cc6086bcd98a71abfd Apr 16 17:44:23.610268 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:23.610229 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-586b57c7b4-r4f69" event={"ID":"42ff6ad5-c964-401b-813a-00dfd31def98","Type":"ContainerStarted","Data":"9efdcdfac91e94d52ce89a2ac48e8a8505e4ca7fc543c2cc6086bcd98a71abfd"} Apr 16 17:44:24.603913 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:24.603878 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6f975cbf96-5wv4r" Apr 16 17:44:25.907352 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:25.907324 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:44:30.920453 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:30.920380 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" podUID="bd43cd3e-442d-4c1a-9eab-890d57ec3a13" containerName="registry" containerID="cri-o://4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301" gracePeriod=30 Apr 16 17:44:31.180699 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.180644 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:44:31.333955 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.333926 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b58j9\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-kube-api-access-b58j9\") pod \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " Apr 16 17:44:31.334121 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.333977 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") pod \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " Apr 16 17:44:31.334121 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.334095 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-certificates\") pod \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " Apr 16 17:44:31.334256 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.334179 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-installation-pull-secrets\") pod \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " Apr 16 17:44:31.334256 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.334209 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-bound-sa-token\") pod \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " Apr 16 17:44:31.334256 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.334252 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-trusted-ca\") pod \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " Apr 16 17:44:31.334399 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.334283 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-image-registry-private-configuration\") pod \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " Apr 16 17:44:31.334399 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.334312 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-ca-trust-extracted\") pod \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\" (UID: \"bd43cd3e-442d-4c1a-9eab-890d57ec3a13\") " Apr 16 17:44:31.335062 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.334543 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bd43cd3e-442d-4c1a-9eab-890d57ec3a13" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:44:31.335182 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.335073 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bd43cd3e-442d-4c1a-9eab-890d57ec3a13" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:44:31.336716 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.336693 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bd43cd3e-442d-4c1a-9eab-890d57ec3a13" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:44:31.336821 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.336689 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bd43cd3e-442d-4c1a-9eab-890d57ec3a13" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:44:31.337333 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.337305 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bd43cd3e-442d-4c1a-9eab-890d57ec3a13" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:44:31.337889 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.337848 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-kube-api-access-b58j9" (OuterVolumeSpecName: "kube-api-access-b58j9") pod "bd43cd3e-442d-4c1a-9eab-890d57ec3a13" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13"). InnerVolumeSpecName "kube-api-access-b58j9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:44:31.338140 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.338115 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "bd43cd3e-442d-4c1a-9eab-890d57ec3a13" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:44:31.344838 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.344817 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bd43cd3e-442d-4c1a-9eab-890d57ec3a13" (UID: "bd43cd3e-442d-4c1a-9eab-890d57ec3a13"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:44:31.435683 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.435621 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b58j9\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-kube-api-access-b58j9\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:44:31.435683 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.435646 2571 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-tls\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:44:31.435683 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.435657 2571 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-registry-certificates\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:44:31.435683 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.435675 2571 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-installation-pull-secrets\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:44:31.435972 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.435689 2571 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-bound-sa-token\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:44:31.435972 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.435705 2571 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-trusted-ca\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:44:31.435972 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.435719 2571 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-image-registry-private-configuration\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:44:31.435972 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.435734 2571 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd43cd3e-442d-4c1a-9eab-890d57ec3a13-ca-trust-extracted\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:44:31.635440 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.635407 2571 generic.go:358] "Generic (PLEG): container finished" podID="bd43cd3e-442d-4c1a-9eab-890d57ec3a13" containerID="4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301" exitCode=0 Apr 16 17:44:31.635607 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.635464 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" Apr 16 17:44:31.635607 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.635496 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" event={"ID":"bd43cd3e-442d-4c1a-9eab-890d57ec3a13","Type":"ContainerDied","Data":"4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301"} Apr 16 17:44:31.635607 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.635528 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-65c7c55b77-4pqmk" event={"ID":"bd43cd3e-442d-4c1a-9eab-890d57ec3a13","Type":"ContainerDied","Data":"26e07b8cfa3f4a3b6048b4a77a51a83ca2214c7c0ab2f79f7484c9d2f7806649"} Apr 16 17:44:31.635607 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.635543 2571 scope.go:117] "RemoveContainer" containerID="4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301" Apr 16 17:44:31.667262 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.667224 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-65c7c55b77-4pqmk"] Apr 16 17:44:31.683574 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:31.683552 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-65c7c55b77-4pqmk"] Apr 16 17:44:33.035316 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:33.035275 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd43cd3e-442d-4c1a-9eab-890d57ec3a13" path="/var/lib/kubelet/pods/bd43cd3e-442d-4c1a-9eab-890d57ec3a13/volumes" Apr 16 17:44:37.159847 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:37.159815 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:37.160305 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:37.159874 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:38.374131 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:38.374073 2571 scope.go:117] "RemoveContainer" containerID="4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301" Apr 16 17:44:38.374395 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:44:38.374374 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301\": container with ID starting with 4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301 not found: ID does not exist" containerID="4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301" Apr 16 17:44:38.374436 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:38.374407 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301"} err="failed to get container status \"4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301\": rpc error: code = NotFound desc = could not find container \"4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301\": container with ID starting with 4e4afb466289baae89acc0c4ef114597de8db5a374d5e4ceef3da7072d0cd301 not found: ID does not exist" Apr 16 17:44:38.662596 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:38.662556 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-586b57c7b4-r4f69" event={"ID":"42ff6ad5-c964-401b-813a-00dfd31def98","Type":"ContainerStarted","Data":"9c8ebab4fb4068a3a73a24c218389cc93205677279fc5ac0dc8261271f7adec8"} Apr 16 17:44:38.662780 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:38.662761 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-586b57c7b4-r4f69" Apr 16 17:44:38.664621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:38.664589 2571 patch_prober.go:28] interesting pod/downloads-586b57c7b4-r4f69 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.133.0.22:8080/\": dial tcp 10.133.0.22:8080: connect: connection refused" start-of-body= Apr 16 17:44:38.664729 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:38.664653 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-586b57c7b4-r4f69" podUID="42ff6ad5-c964-401b-813a-00dfd31def98" containerName="download-server" probeResult="failure" output="Get \"http://10.133.0.22:8080/\": dial tcp 10.133.0.22:8080: connect: connection refused" Apr 16 17:44:38.681213 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:38.681164 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-586b57c7b4-r4f69" podStartSLOduration=0.960591299 podStartE2EDuration="16.681149996s" podCreationTimestamp="2026-04-16 17:44:22 +0000 UTC" firstStartedPulling="2026-04-16 17:44:22.757734305 +0000 UTC m=+184.230963759" lastFinishedPulling="2026-04-16 17:44:38.478293006 +0000 UTC m=+199.951522456" observedRunningTime="2026-04-16 17:44:38.6797731 +0000 UTC m=+200.153002571" watchObservedRunningTime="2026-04-16 17:44:38.681149996 +0000 UTC m=+200.154379469" Apr 16 17:44:39.683873 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:39.683829 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-586b57c7b4-r4f69" Apr 16 17:44:40.671546 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:40.671516 2571 generic.go:358] "Generic (PLEG): container finished" podID="25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611" containerID="80cce3d98c466c41fc7f1a7106f9f53137c1c1762442616bf19d07d461663c00" exitCode=0 Apr 16 17:44:40.671674 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:40.671621 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" event={"ID":"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611","Type":"ContainerDied","Data":"80cce3d98c466c41fc7f1a7106f9f53137c1c1762442616bf19d07d461663c00"} Apr 16 17:44:40.672117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:40.672093 2571 scope.go:117] "RemoveContainer" containerID="80cce3d98c466c41fc7f1a7106f9f53137c1c1762442616bf19d07d461663c00" Apr 16 17:44:41.013440 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.013404 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c7b57fc98-j6pcb"] Apr 16 17:44:41.013901 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.013827 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd43cd3e-442d-4c1a-9eab-890d57ec3a13" containerName="registry" Apr 16 17:44:41.013901 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.013844 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd43cd3e-442d-4c1a-9eab-890d57ec3a13" containerName="registry" Apr 16 17:44:41.014012 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.013958 2571 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="bd43cd3e-442d-4c1a-9eab-890d57ec3a13" containerName="registry" Apr 16 17:44:41.028149 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.028030 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c7b57fc98-j6pcb"] Apr 16 17:44:41.028332 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.028155 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.030677 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.030654 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 16 17:44:41.031194 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.030901 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-ghc5z\"" Apr 16 17:44:41.031321 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.031243 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 16 17:44:41.031385 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.031364 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 16 17:44:41.031602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.031572 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 16 17:44:41.031688 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.031679 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 16 17:44:41.036086 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.036068 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 16 17:44:41.218686 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.218658 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-oauth-serving-cert\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.218945 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.218747 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-oauth-config\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.218945 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.218777 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-config\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.218945 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.218803 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-serving-cert\") pod 
\"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.218945 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.218917 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-trusted-ca-bundle\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.219166 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.218948 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wnw\" (UniqueName: \"kubernetes.io/projected/0c68a3de-2220-417d-9231-0f60cc4eef6d-kube-api-access-s8wnw\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.219166 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.218976 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-service-ca\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.319940 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.319846 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-service-ca\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.319940 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.319932 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-oauth-serving-cert\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.320217 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.319999 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-oauth-config\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.320217 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.320026 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-config\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.320217 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.320050 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-serving-cert\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.320217 ip-10-0-138-134 kubenswrapper[2571]: I0416 
17:44:41.320109 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-trusted-ca-bundle\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.320217 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.320138 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wnw\" (UniqueName: \"kubernetes.io/projected/0c68a3de-2220-417d-9231-0f60cc4eef6d-kube-api-access-s8wnw\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.320715 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.320690 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-service-ca\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.320878 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.320697 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-oauth-serving-cert\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.321077 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.321034 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-config\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.321237 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.321196 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-trusted-ca-bundle\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.322741 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.322724 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-oauth-config\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.323158 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.323136 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-serving-cert\") pod \"console-7c7b57fc98-j6pcb\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.328463 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.328441 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8wnw\" (UniqueName: \"kubernetes.io/projected/0c68a3de-2220-417d-9231-0f60cc4eef6d-kube-api-access-s8wnw\") pod \"console-7c7b57fc98-j6pcb\" (UID: 
\"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.342186 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.342162 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:41.487187 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.485959 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c7b57fc98-j6pcb"] Apr 16 17:44:41.676704 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.676664 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-5785d4fcdd-7pxpm" event={"ID":"25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611","Type":"ContainerStarted","Data":"bf68013598569a9d518e5c03856024eec1308fcc33728aa263f558fdb0bf57cb"} Apr 16 17:44:41.677946 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:41.677908 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c7b57fc98-j6pcb" event={"ID":"0c68a3de-2220-417d-9231-0f60cc4eef6d","Type":"ContainerStarted","Data":"98db45f9b1293fc7e6c76db5bac13ed71d34df2793940b5fddc92912dd8a4bc1"} Apr 16 17:44:45.693679 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:45.693639 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c7b57fc98-j6pcb" event={"ID":"0c68a3de-2220-417d-9231-0f60cc4eef6d","Type":"ContainerStarted","Data":"92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc"} Apr 16 17:44:45.712379 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:45.712331 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c7b57fc98-j6pcb" podStartSLOduration=2.443891364 podStartE2EDuration="5.712315369s" podCreationTimestamp="2026-04-16 17:44:40 +0000 UTC" firstStartedPulling="2026-04-16 17:44:41.491934221 +0000 UTC m=+202.965163689" lastFinishedPulling="2026-04-16 17:44:44.76035823 +0000 UTC m=+206.233587694" observedRunningTime="2026-04-16 17:44:45.710802441 +0000 UTC m=+207.184031916" watchObservedRunningTime="2026-04-16 17:44:45.712315369 +0000 UTC m=+207.185544844" Apr 16 17:44:46.698439 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:46.698408 2571 generic.go:358] "Generic (PLEG): container finished" podID="27efa923-840a-4df7-8dcb-d30a622b5c3f" containerID="c56bc181f645382c07709f24d507c7eb9847b9d9d2f222171fe17f5a100aded8" exitCode=0 Apr 16 17:44:46.698925 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:46.698491 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" event={"ID":"27efa923-840a-4df7-8dcb-d30a622b5c3f","Type":"ContainerDied","Data":"c56bc181f645382c07709f24d507c7eb9847b9d9d2f222171fe17f5a100aded8"} Apr 16 17:44:46.699041 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:46.699025 2571 scope.go:117] "RemoveContainer" containerID="c56bc181f645382c07709f24d507c7eb9847b9d9d2f222171fe17f5a100aded8" Apr 16 17:44:47.703668 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:47.703631 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-756bb7d76f-gzcr7" event={"ID":"27efa923-840a-4df7-8dcb-d30a622b5c3f","Type":"ContainerStarted","Data":"3919bd4d20724b3d6e780793a8391e6be3ccaeea570932e1b458075097541814"} Apr 16 17:44:51.343193 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:51.343155 2571 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:51.343697 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:51.343313 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:51.348405 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:51.348382 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:51.718610 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:51.718587 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:44:57.164101 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:57.164074 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:44:57.167961 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:44:57.167938 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-76fcbb669c-ql7h9" Apr 16 17:45:16.793401 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:16.793367 2571 generic.go:358] "Generic (PLEG): container finished" podID="13deaf04-20aa-41b3-8c14-653473a8ddc7" containerID="d632addbcce4984b2520acc96355580aa0f9109908393e996864da1a0ef95701" exitCode=0 Apr 16 17:45:16.793744 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:16.793441 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" event={"ID":"13deaf04-20aa-41b3-8c14-653473a8ddc7","Type":"ContainerDied","Data":"d632addbcce4984b2520acc96355580aa0f9109908393e996864da1a0ef95701"} Apr 16 17:45:16.793744 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:16.793736 2571 scope.go:117] "RemoveContainer" containerID="d632addbcce4984b2520acc96355580aa0f9109908393e996864da1a0ef95701" Apr 16 17:45:17.798442 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:17.798407 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69965bb79d-dtpsn" event={"ID":"13deaf04-20aa-41b3-8c14-653473a8ddc7","Type":"ContainerStarted","Data":"00cfbf5ed4b3761ef5feedf06499c309e8d6106da3a0bec99d4b89396db11623"} Apr 16 17:45:30.811879 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:30.811821 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:45:30.814451 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:30.814424 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b6b22f4-0d78-4198-b821-0f4f52115d9c-metrics-certs\") pod \"network-metrics-daemon-x4kh5\" (UID: \"5b6b22f4-0d78-4198-b821-0f4f52115d9c\") " pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:45:30.938345 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:30.938319 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-8xx6l\"" Apr 16 17:45:30.946173 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:30.946156 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-x4kh5" Apr 16 17:45:31.064422 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:31.064353 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-x4kh5"] Apr 16 17:45:31.068145 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:45:31.068115 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6b22f4_0d78_4198_b821_0f4f52115d9c.slice/crio-801c461eb854657379e6adc4aab2c28c799211664de1c1d10894dece7fe1ff10 WatchSource:0}: Error finding container 801c461eb854657379e6adc4aab2c28c799211664de1c1d10894dece7fe1ff10: Status 404 returned error can't find the container with id 801c461eb854657379e6adc4aab2c28c799211664de1c1d10894dece7fe1ff10 Apr 16 17:45:31.842932 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:31.842893 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x4kh5" event={"ID":"5b6b22f4-0d78-4198-b821-0f4f52115d9c","Type":"ContainerStarted","Data":"801c461eb854657379e6adc4aab2c28c799211664de1c1d10894dece7fe1ff10"} Apr 16 17:45:32.847953 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:32.847914 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x4kh5" event={"ID":"5b6b22f4-0d78-4198-b821-0f4f52115d9c","Type":"ContainerStarted","Data":"78c827ae7a207948ee2894712c1756b1ca21fb5c0999882645d4e4b34b0d777d"} Apr 16 17:45:32.847953 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:32.847955 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x4kh5" event={"ID":"5b6b22f4-0d78-4198-b821-0f4f52115d9c","Type":"ContainerStarted","Data":"b4157f93c6db444a398578e5cfb952faa5eb76c901d33392391cb50f6e240f16"} Apr 16 17:45:32.864518 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:32.864468 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-x4kh5" podStartSLOduration=252.830757444 podStartE2EDuration="4m13.864449628s" podCreationTimestamp="2026-04-16 17:41:19 +0000 UTC" firstStartedPulling="2026-04-16 17:45:31.069992218 +0000 UTC m=+252.543221676" lastFinishedPulling="2026-04-16 17:45:32.103684391 +0000 UTC m=+253.576913860" observedRunningTime="2026-04-16 17:45:32.863408617 +0000 UTC m=+254.336638090" watchObservedRunningTime="2026-04-16 17:45:32.864449628 +0000 UTC m=+254.337679102" Apr 16 17:45:39.981080 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:45:39.981033 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c7b57fc98-j6pcb"] Apr 16 17:46:05.006714 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.006651 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-7c7b57fc98-j6pcb" podUID="0c68a3de-2220-417d-9231-0f60cc4eef6d" containerName="console" containerID="cri-o://92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc" gracePeriod=15 Apr 16 17:46:05.243586 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.243568 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c7b57fc98-j6pcb_0c68a3de-2220-417d-9231-0f60cc4eef6d/console/0.log" Apr 16 17:46:05.243681 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.243623 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:46:05.251836 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.251814 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8wnw\" (UniqueName: \"kubernetes.io/projected/0c68a3de-2220-417d-9231-0f60cc4eef6d-kube-api-access-s8wnw\") pod \"0c68a3de-2220-417d-9231-0f60cc4eef6d\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " Apr 16 17:46:05.251913 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.251873 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-oauth-serving-cert\") pod \"0c68a3de-2220-417d-9231-0f60cc4eef6d\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " Apr 16 17:46:05.251913 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.251901 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-serving-cert\") pod \"0c68a3de-2220-417d-9231-0f60cc4eef6d\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " Apr 16 17:46:05.251989 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.251929 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-trusted-ca-bundle\") pod \"0c68a3de-2220-417d-9231-0f60cc4eef6d\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " Apr 16 17:46:05.251989 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.251954 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-oauth-config\") pod \"0c68a3de-2220-417d-9231-0f60cc4eef6d\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " Apr 16 17:46:05.251989 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.251973 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-config\") pod \"0c68a3de-2220-417d-9231-0f60cc4eef6d\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " Apr 16 17:46:05.252119 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.252006 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-service-ca\") pod \"0c68a3de-2220-417d-9231-0f60cc4eef6d\" (UID: \"0c68a3de-2220-417d-9231-0f60cc4eef6d\") " Apr 16 17:46:05.252324 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.252298 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0c68a3de-2220-417d-9231-0f60cc4eef6d" (UID: "0c68a3de-2220-417d-9231-0f60cc4eef6d"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:46:05.252445 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.252411 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-config" (OuterVolumeSpecName: "console-config") pod "0c68a3de-2220-417d-9231-0f60cc4eef6d" (UID: "0c68a3de-2220-417d-9231-0f60cc4eef6d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:46:05.252514 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.252439 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0c68a3de-2220-417d-9231-0f60cc4eef6d" (UID: "0c68a3de-2220-417d-9231-0f60cc4eef6d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:46:05.252514 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.252465 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-service-ca" (OuterVolumeSpecName: "service-ca") pod "0c68a3de-2220-417d-9231-0f60cc4eef6d" (UID: "0c68a3de-2220-417d-9231-0f60cc4eef6d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:46:05.253960 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.253941 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c68a3de-2220-417d-9231-0f60cc4eef6d-kube-api-access-s8wnw" (OuterVolumeSpecName: "kube-api-access-s8wnw") pod "0c68a3de-2220-417d-9231-0f60cc4eef6d" (UID: "0c68a3de-2220-417d-9231-0f60cc4eef6d"). InnerVolumeSpecName "kube-api-access-s8wnw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:46:05.254312 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.254285 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0c68a3de-2220-417d-9231-0f60cc4eef6d" (UID: "0c68a3de-2220-417d-9231-0f60cc4eef6d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:46:05.254353 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.254299 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0c68a3de-2220-417d-9231-0f60cc4eef6d" (UID: "0c68a3de-2220-417d-9231-0f60cc4eef6d"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:46:05.352630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.352612 2571 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-serving-cert\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:05.352630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.352631 2571 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-trusted-ca-bundle\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:05.352754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.352640 2571 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-oauth-config\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:05.352754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.352649 2571 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-console-config\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:05.352754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.352659 2571 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-service-ca\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:05.352754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.352667 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8wnw\" (UniqueName: \"kubernetes.io/projected/0c68a3de-2220-417d-9231-0f60cc4eef6d-kube-api-access-s8wnw\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:05.352754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.352676 2571 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c68a3de-2220-417d-9231-0f60cc4eef6d-oauth-serving-cert\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:05.967464 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.967438 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c7b57fc98-j6pcb_0c68a3de-2220-417d-9231-0f60cc4eef6d/console/0.log" Apr 16 17:46:05.967631 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.967476 2571 generic.go:358] "Generic (PLEG): container finished" podID="0c68a3de-2220-417d-9231-0f60cc4eef6d" containerID="92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc" exitCode=2 Apr 16 17:46:05.967631 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.967527 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c7b57fc98-j6pcb" event={"ID":"0c68a3de-2220-417d-9231-0f60cc4eef6d","Type":"ContainerDied","Data":"92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc"} Apr 16 17:46:05.967631 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.967544 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c7b57fc98-j6pcb" Apr 16 17:46:05.967631 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.967561 2571 scope.go:117] "RemoveContainer" containerID="92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc" Apr 16 17:46:05.967811 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.967550 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c7b57fc98-j6pcb" event={"ID":"0c68a3de-2220-417d-9231-0f60cc4eef6d","Type":"ContainerDied","Data":"98db45f9b1293fc7e6c76db5bac13ed71d34df2793940b5fddc92912dd8a4bc1"} Apr 16 17:46:05.976513 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.976496 2571 scope.go:117] "RemoveContainer" containerID="92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc" Apr 16 17:46:05.976759 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:46:05.976742 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc\": container with ID starting with 92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc not found: ID does not exist" containerID="92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc" Apr 16 17:46:05.976801 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.976767 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc"} err="failed to get container status \"92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc\": rpc error: code = NotFound desc = could not find container \"92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc\": container with ID starting with 92fd0915233fe4979df644f56bf5300984af52865a917dfd7d1fb394f00568dc not found: ID does not exist" Apr 16 17:46:05.988812 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.988787 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c7b57fc98-j6pcb"] Apr 16 17:46:05.993903 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:05.993881 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7c7b57fc98-j6pcb"] Apr 16 17:46:07.035290 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:07.035249 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c68a3de-2220-417d-9231-0f60cc4eef6d" path="/var/lib/kubelet/pods/0c68a3de-2220-417d-9231-0f60cc4eef6d/volumes" Apr 16 17:46:18.932796 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:18.932773 2571 kubelet.go:1628] "Image garbage collection succeeded" Apr 16 17:46:38.295647 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.295616 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6"] Apr 16 17:46:38.298081 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.295961 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c68a3de-2220-417d-9231-0f60cc4eef6d" containerName="console" Apr 16 17:46:38.298081 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.295972 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c68a3de-2220-417d-9231-0f60cc4eef6d" containerName="console" Apr 16 17:46:38.298081 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.296025 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c68a3de-2220-417d-9231-0f60cc4eef6d" 
containerName="console" Apr 16 17:46:38.299025 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.299007 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.301044 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.301028 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Apr 16 17:46:38.301589 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.301572 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-zbtvt\"" Apr 16 17:46:38.301647 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.301600 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Apr 16 17:46:38.308038 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.308015 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6"] Apr 16 17:46:38.397716 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.397691 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9w5n\" (UniqueName: \"kubernetes.io/projected/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-kube-api-access-r9w5n\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.397828 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.397744 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.397828 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.397785 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.498897 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.498843 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.498897 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.498897 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " 
pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.499080 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.499032 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r9w5n\" (UniqueName: \"kubernetes.io/projected/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-kube-api-access-r9w5n\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.499277 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.499259 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.499314 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.499302 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.506813 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.506785 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9w5n\" (UniqueName: \"kubernetes.io/projected/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-kube-api-access-r9w5n\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.608638 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.608563 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:38.726307 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.726278 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6"] Apr 16 17:46:38.729323 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:46:38.729299 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4ca1367_d5d6_409f_b757_9cb05f7c29f1.slice/crio-2d0d07abe2be8d47518a6a3b85264b44fc6f71961dac2270f046690cc00ab7c4 WatchSource:0}: Error finding container 2d0d07abe2be8d47518a6a3b85264b44fc6f71961dac2270f046690cc00ab7c4: Status 404 returned error can't find the container with id 2d0d07abe2be8d47518a6a3b85264b44fc6f71961dac2270f046690cc00ab7c4 Apr 16 17:46:38.731189 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:38.731172 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 17:46:39.074726 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:39.074701 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" event={"ID":"a4ca1367-d5d6-409f-b757-9cb05f7c29f1","Type":"ContainerStarted","Data":"2d0d07abe2be8d47518a6a3b85264b44fc6f71961dac2270f046690cc00ab7c4"} Apr 16 17:46:44.092700 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:44.092668 2571 generic.go:358] "Generic (PLEG): container finished" podID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerID="8274874ed46fec852042d578e3eecf5bf3a18b0cdb7c8f2485d4e7e62283a364" exitCode=0 Apr 16 17:46:44.093063 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:44.092764 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" event={"ID":"a4ca1367-d5d6-409f-b757-9cb05f7c29f1","Type":"ContainerDied","Data":"8274874ed46fec852042d578e3eecf5bf3a18b0cdb7c8f2485d4e7e62283a364"} Apr 16 17:46:47.101568 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:47.101534 2571 generic.go:358] "Generic (PLEG): container finished" podID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerID="8597b7a220ab6eb294e885f6b9384304a85ebf3003fcf7b03d1bac50b3fcb4ea" exitCode=0 Apr 16 17:46:47.101931 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:47.101575 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" event={"ID":"a4ca1367-d5d6-409f-b757-9cb05f7c29f1","Type":"ContainerDied","Data":"8597b7a220ab6eb294e885f6b9384304a85ebf3003fcf7b03d1bac50b3fcb4ea"} Apr 16 17:46:53.127132 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:53.127099 2571 generic.go:358] "Generic (PLEG): container finished" podID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerID="9ae1bb138d196580b72d044cc9625287827aded9c133f6c70d23cb04c4f038dc" exitCode=0 Apr 16 17:46:53.127594 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:53.127130 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" event={"ID":"a4ca1367-d5d6-409f-b757-9cb05f7c29f1","Type":"ContainerDied","Data":"9ae1bb138d196580b72d044cc9625287827aded9c133f6c70d23cb04c4f038dc"} Apr 16 17:46:54.244843 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.244822 2571 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:46:54.335006 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.334981 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-bundle\") pod \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " Apr 16 17:46:54.335117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.335063 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-util\") pod \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " Apr 16 17:46:54.335117 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.335107 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9w5n\" (UniqueName: \"kubernetes.io/projected/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-kube-api-access-r9w5n\") pod \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\" (UID: \"a4ca1367-d5d6-409f-b757-9cb05f7c29f1\") " Apr 16 17:46:54.335569 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.335539 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-bundle" (OuterVolumeSpecName: "bundle") pod "a4ca1367-d5d6-409f-b757-9cb05f7c29f1" (UID: "a4ca1367-d5d6-409f-b757-9cb05f7c29f1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:46:54.337217 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.337196 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-kube-api-access-r9w5n" (OuterVolumeSpecName: "kube-api-access-r9w5n") pod "a4ca1367-d5d6-409f-b757-9cb05f7c29f1" (UID: "a4ca1367-d5d6-409f-b757-9cb05f7c29f1"). InnerVolumeSpecName "kube-api-access-r9w5n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:46:54.339232 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.339212 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-util" (OuterVolumeSpecName: "util") pod "a4ca1367-d5d6-409f-b757-9cb05f7c29f1" (UID: "a4ca1367-d5d6-409f-b757-9cb05f7c29f1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:46:54.436280 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.436256 2571 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-util\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:54.436280 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.436279 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r9w5n\" (UniqueName: \"kubernetes.io/projected/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-kube-api-access-r9w5n\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:54.436409 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:54.436292 2571 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ca1367-d5d6-409f-b757-9cb05f7c29f1-bundle\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:46:55.133596 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:55.133568 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" event={"ID":"a4ca1367-d5d6-409f-b757-9cb05f7c29f1","Type":"ContainerDied","Data":"2d0d07abe2be8d47518a6a3b85264b44fc6f71961dac2270f046690cc00ab7c4"} Apr 16 17:46:55.133728 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:55.133601 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d0d07abe2be8d47518a6a3b85264b44fc6f71961dac2270f046690cc00ab7c4" Apr 16 17:46:55.133728 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:46:55.133571 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29csptc6" Apr 16 17:47:04.390959 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.390882 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-nbxrg"] Apr 16 17:47:04.391387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.391177 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerName="pull" Apr 16 17:47:04.391387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.391188 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerName="pull" Apr 16 17:47:04.391387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.391208 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerName="extract" Apr 16 17:47:04.391387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.391213 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerName="extract" Apr 16 17:47:04.391387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.391221 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerName="util" Apr 16 17:47:04.391387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.391226 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerName="util" Apr 16 17:47:04.391387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.391276 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4ca1367-d5d6-409f-b757-9cb05f7c29f1" containerName="extract" Apr 16 17:47:04.394011 
ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.393996 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.396368 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.396344 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 16 17:47:04.396509 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.396414 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 16 17:47:04.396509 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.396431 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 16 17:47:04.396509 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.396502 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 16 17:47:04.396707 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.396506 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-5hc7x\"" Apr 16 17:47:04.397107 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.397069 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\"" Apr 16 17:47:04.405998 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.405978 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-nbxrg"] Apr 16 17:47:04.509093 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.509072 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxh5c\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-kube-api-access-xxh5c\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.509192 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.509115 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.509192 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.509139 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-cabundle0\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.609887 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.609839 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxh5c\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-kube-api-access-xxh5c\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.610008 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.609943 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: 
\"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.610008 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.609987 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-cabundle0\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.610117 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:04.610076 2571 projected.go:264] Couldn't get secret openshift-keda/keda-operator-certs: secret "keda-operator-certs" not found Apr 16 17:47:04.610117 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:04.610095 2571 secret.go:281] references non-existent secret key: ca.crt Apr 16 17:47:04.610117 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:04.610102 2571 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 17:47:04.610117 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:04.610114 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-nbxrg: [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt] Apr 16 17:47:04.610269 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:04.610170 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates podName:cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05 nodeName:}" failed. No retries permitted until 2026-04-16 17:47:05.110154858 +0000 UTC m=+346.583384309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates") pod "keda-operator-ffbb595cb-nbxrg" (UID: "cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05") : [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt] Apr 16 17:47:04.610737 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.610713 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-cabundle0\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.621650 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.621630 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxh5c\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-kube-api-access-xxh5c\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" Apr 16 17:47:04.701380 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.701317 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"] Apr 16 17:47:04.708010 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.707979 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" Apr 16 17:47:04.710214 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.710195 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 16 17:47:04.718531 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.718511 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"] Apr 16 17:47:04.811216 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.811192 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" Apr 16 17:47:04.811308 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.811229 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rqj2\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-kube-api-access-5rqj2\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" Apr 16 17:47:04.811348 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.811314 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/60138c66-5b15-46e1-949c-003ae842b87a-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" Apr 16 17:47:04.911774 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.911745 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" Apr 16 17:47:04.911898 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.911800 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rqj2\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-kube-api-access-5rqj2\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" Apr 16 17:47:04.911898 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.911849 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/60138c66-5b15-46e1-949c-003ae842b87a-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" Apr 16 17:47:04.911898 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:04.911892 2571 secret.go:281] references non-existent secret key: tls.crt Apr 16 17:47:04.912003 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:04.911908 2571 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 16 17:47:04.912003 ip-10-0-138-134 
kubenswrapper[2571]: E0416 17:47:04.911926 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9: references non-existent secret key: tls.crt
Apr 16 17:47:04.912003 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:04.911988 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates podName:60138c66-5b15-46e1-949c-003ae842b87a nodeName:}" failed. No retries permitted until 2026-04-16 17:47:05.411973918 +0000 UTC m=+346.885203376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates") pod "keda-metrics-apiserver-7c9f485588-qvtc9" (UID: "60138c66-5b15-46e1-949c-003ae842b87a") : references non-existent secret key: tls.crt
Apr 16 17:47:04.912250 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.912234 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/60138c66-5b15-46e1-949c-003ae842b87a-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:04.927898 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.923666 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rqj2\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-kube-api-access-5rqj2\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:04.935184 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.935166 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-brlcx"]
Apr 16 17:47:04.938502 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.938487 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:04.940359 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.940340 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\""
Apr 16 17:47:04.948201 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:04.948180 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-brlcx"]
Apr 16 17:47:05.114167 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.114077 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/a7c5e733-186b-4127-9d9d-ce8e00e41b43-certificates\") pod \"keda-admission-cf49989db-brlcx\" (UID: \"a7c5e733-186b-4127-9d9d-ce8e00e41b43\") " pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:05.114337 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.114168 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdbw\" (UniqueName: \"kubernetes.io/projected/a7c5e733-186b-4127-9d9d-ce8e00e41b43-kube-api-access-krdbw\") pod \"keda-admission-cf49989db-brlcx\" (UID: \"a7c5e733-186b-4127-9d9d-ce8e00e41b43\") " pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:05.114337 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.114286 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg"
Apr 16 17:47:05.114470 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:05.114432 2571 secret.go:281] references non-existent secret key: ca.crt
Apr 16 17:47:05.114470 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:05.114470 2571 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 16 17:47:05.114632 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:05.114482 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-nbxrg: references non-existent secret key: ca.crt
Apr 16 17:47:05.114632 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:05.114540 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates podName:cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05 nodeName:}" failed. No retries permitted until 2026-04-16 17:47:06.114519452 +0000 UTC m=+347.587748904 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates") pod "keda-operator-ffbb595cb-nbxrg" (UID: "cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05") : references non-existent secret key: ca.crt
Apr 16 17:47:05.215516 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.214940 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/a7c5e733-186b-4127-9d9d-ce8e00e41b43-certificates\") pod \"keda-admission-cf49989db-brlcx\" (UID: \"a7c5e733-186b-4127-9d9d-ce8e00e41b43\") " pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:05.215516 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.214989 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krdbw\" (UniqueName: \"kubernetes.io/projected/a7c5e733-186b-4127-9d9d-ce8e00e41b43-kube-api-access-krdbw\") pod \"keda-admission-cf49989db-brlcx\" (UID: \"a7c5e733-186b-4127-9d9d-ce8e00e41b43\") " pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:05.217883 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.217838 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/a7c5e733-186b-4127-9d9d-ce8e00e41b43-certificates\") pod \"keda-admission-cf49989db-brlcx\" (UID: \"a7c5e733-186b-4127-9d9d-ce8e00e41b43\") " pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:05.222926 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.222897 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdbw\" (UniqueName: \"kubernetes.io/projected/a7c5e733-186b-4127-9d9d-ce8e00e41b43-kube-api-access-krdbw\") pod \"keda-admission-cf49989db-brlcx\" (UID: \"a7c5e733-186b-4127-9d9d-ce8e00e41b43\") " pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:05.249598 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.249573 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:05.376810 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.376688 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-brlcx"]
Apr 16 17:47:05.379709 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:47:05.379684 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7c5e733_186b_4127_9d9d_ce8e00e41b43.slice/crio-0884a0094b7130c323f455a9c6a7339dd5ad4911e15fa7713b862c2c319a80a5 WatchSource:0}: Error finding container 0884a0094b7130c323f455a9c6a7339dd5ad4911e15fa7713b862c2c319a80a5: Status 404 returned error can't find the container with id 0884a0094b7130c323f455a9c6a7339dd5ad4911e15fa7713b862c2c319a80a5
Apr 16 17:47:05.416702 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:05.416673 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:05.417091 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:05.416823 2571 secret.go:281] references non-existent secret key: tls.crt
Apr 16 17:47:05.417091 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:05.416840 2571 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 17:47:05.417091 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:05.416879 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9: references non-existent secret key: tls.crt
Apr 16 17:47:05.417091 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:05.416937 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates podName:60138c66-5b15-46e1-949c-003ae842b87a nodeName:}" failed. No retries permitted until 2026-04-16 17:47:06.416917085 +0000 UTC m=+347.890146539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates") pod "keda-metrics-apiserver-7c9f485588-qvtc9" (UID: "60138c66-5b15-46e1-949c-003ae842b87a") : references non-existent secret key: tls.crt
Apr 16 17:47:06.122394 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:06.122365 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg"
Apr 16 17:47:06.122542 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:06.122502 2571 secret.go:281] references non-existent secret key: ca.crt
Apr 16 17:47:06.122542 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:06.122519 2571 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 16 17:47:06.122542 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:06.122529 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-nbxrg: references non-existent secret key: ca.crt
Apr 16 17:47:06.122641 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:06.122578 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates podName:cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05 nodeName:}" failed. No retries permitted until 2026-04-16 17:47:08.122563613 +0000 UTC m=+349.595793064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates") pod "keda-operator-ffbb595cb-nbxrg" (UID: "cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05") : references non-existent secret key: ca.crt
Apr 16 17:47:06.165516 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:06.165491 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-brlcx" event={"ID":"a7c5e733-186b-4127-9d9d-ce8e00e41b43","Type":"ContainerStarted","Data":"0884a0094b7130c323f455a9c6a7339dd5ad4911e15fa7713b862c2c319a80a5"}
Apr 16 17:47:06.425104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:06.425059 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:06.425464 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:06.425196 2571 secret.go:281] references non-existent secret key: tls.crt
Apr 16 17:47:06.425464 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:06.425213 2571 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 17:47:06.425464 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:06.425234 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9: references non-existent secret key: tls.crt
Apr 16 17:47:06.425464 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:06.425286 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates podName:60138c66-5b15-46e1-949c-003ae842b87a nodeName:}" failed. No retries permitted until 2026-04-16 17:47:08.425271081 +0000 UTC m=+349.898500532 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates") pod "keda-metrics-apiserver-7c9f485588-qvtc9" (UID: "60138c66-5b15-46e1-949c-003ae842b87a") : references non-existent secret key: tls.crt
Apr 16 17:47:08.140829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:08.140794 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg"
Apr 16 17:47:08.141243 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:08.140964 2571 secret.go:281] references non-existent secret key: ca.crt
Apr 16 17:47:08.141243 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:08.140985 2571 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 16 17:47:08.141243 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:08.140995 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-nbxrg: references non-existent secret key: ca.crt
Apr 16 17:47:08.141243 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:08.141055 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates podName:cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05 nodeName:}" failed. No retries permitted until 2026-04-16 17:47:12.14103779 +0000 UTC m=+353.614267241 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates") pod "keda-operator-ffbb595cb-nbxrg" (UID: "cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05") : references non-existent secret key: ca.crt
Apr 16 17:47:08.172953 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:08.172915 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-brlcx" event={"ID":"a7c5e733-186b-4127-9d9d-ce8e00e41b43","Type":"ContainerStarted","Data":"a5adba75738c8ecf736dc0fbe4494fe3c01e1fd87406da76ff13c470f4d45997"}
Apr 16 17:47:08.173090 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:08.173006 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:08.190733 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:08.190692 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-brlcx" podStartSLOduration=2.396134198 podStartE2EDuration="4.190678209s" podCreationTimestamp="2026-04-16 17:47:04 +0000 UTC" firstStartedPulling="2026-04-16 17:47:05.380903166 +0000 UTC m=+346.854132616" lastFinishedPulling="2026-04-16 17:47:07.175447176 +0000 UTC m=+348.648676627" observedRunningTime="2026-04-16 17:47:08.188748761 +0000 UTC m=+349.661978236" watchObservedRunningTime="2026-04-16 17:47:08.190678209 +0000 UTC m=+349.663907681"
Apr 16 17:47:08.442275 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:08.442204 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:08.442404 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:08.442326 2571 secret.go:281] references non-existent secret key: tls.crt
Apr 16 17:47:08.442404 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:08.442341 2571 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 17:47:08.442404 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:08.442360 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9: references non-existent secret key: tls.crt
Apr 16 17:47:08.442502 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:47:08.442423 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates podName:60138c66-5b15-46e1-949c-003ae842b87a nodeName:}" failed. No retries permitted until 2026-04-16 17:47:12.442409883 +0000 UTC m=+353.915639334 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates") pod "keda-metrics-apiserver-7c9f485588-qvtc9" (UID: "60138c66-5b15-46e1-949c-003ae842b87a") : references non-existent secret key: tls.crt
Apr 16 17:47:12.167693 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:12.167653 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg"
Apr 16 17:47:12.170343 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:12.170317 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05-certificates\") pod \"keda-operator-ffbb595cb-nbxrg\" (UID: \"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05\") " pod="openshift-keda/keda-operator-ffbb595cb-nbxrg"
Apr 16 17:47:12.203949 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:12.203922 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-nbxrg"
Apr 16 17:47:12.322827 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:12.322791 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-nbxrg"]
Apr 16 17:47:12.325659 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:47:12.325619 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd4bb9d4_c3fd_4359_9b7e_c51dba4b2b05.slice/crio-eec8b8db09db3c3bff73bbe7a64174670e52b7b3ee67d9815bfcdd6b47e762d7 WatchSource:0}: Error finding container eec8b8db09db3c3bff73bbe7a64174670e52b7b3ee67d9815bfcdd6b47e762d7: Status 404 returned error can't find the container with id eec8b8db09db3c3bff73bbe7a64174670e52b7b3ee67d9815bfcdd6b47e762d7
Apr 16 17:47:12.470594 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:12.470516 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:12.472925 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:12.472905 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/60138c66-5b15-46e1-949c-003ae842b87a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-qvtc9\" (UID: \"60138c66-5b15-46e1-949c-003ae842b87a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:12.518322 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:12.518296 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:12.635822 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:12.635792 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"]
Apr 16 17:47:12.638645 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:47:12.638622 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60138c66_5b15_46e1_949c_003ae842b87a.slice/crio-78ea3bfa664a955b977c9d5b42ec2f65ceb6a03d9b56434af18548085383aba3 WatchSource:0}: Error finding container 78ea3bfa664a955b977c9d5b42ec2f65ceb6a03d9b56434af18548085383aba3: Status 404 returned error can't find the container with id 78ea3bfa664a955b977c9d5b42ec2f65ceb6a03d9b56434af18548085383aba3
Apr 16 17:47:13.189047 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:13.189014 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" event={"ID":"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05","Type":"ContainerStarted","Data":"eec8b8db09db3c3bff73bbe7a64174670e52b7b3ee67d9815bfcdd6b47e762d7"}
Apr 16 17:47:13.190021 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:13.189998 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" event={"ID":"60138c66-5b15-46e1-949c-003ae842b87a","Type":"ContainerStarted","Data":"78ea3bfa664a955b977c9d5b42ec2f65ceb6a03d9b56434af18548085383aba3"}
Apr 16 17:47:16.201019 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:16.200980 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" event={"ID":"60138c66-5b15-46e1-949c-003ae842b87a","Type":"ContainerStarted","Data":"ca4a30c81e441c9715a80135845e3f79a920e7392f89e6f55fd254c2f97dbc0c"}
Apr 16 17:47:16.201456 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:16.201074 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:16.218809 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:16.218764 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9" podStartSLOduration=9.505093913 podStartE2EDuration="12.218742634s" podCreationTimestamp="2026-04-16 17:47:04 +0000 UTC" firstStartedPulling="2026-04-16 17:47:12.63994051 +0000 UTC m=+354.113169961" lastFinishedPulling="2026-04-16 17:47:15.353589231 +0000 UTC m=+356.826818682" observedRunningTime="2026-04-16 17:47:16.216940184 +0000 UTC m=+357.690169677" watchObservedRunningTime="2026-04-16 17:47:16.218742634 +0000 UTC m=+357.691972107"
Apr 16 17:47:27.209269 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:27.209232 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-qvtc9"
Apr 16 17:47:29.178531 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:29.178500 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-brlcx"
Apr 16 17:47:32.251871 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:32.251833 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" event={"ID":"cd4bb9d4-c3fd-4359-9b7e-c51dba4b2b05","Type":"ContainerStarted","Data":"aaa9246d856d2d059848262660e437d90865b273556ccff87a6809a91f1d374d"}
Apr 16 17:47:32.252280 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:32.251903 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-nbxrg"
Apr 16 17:47:32.270945 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:32.270900 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-nbxrg" podStartSLOduration=9.035424266 podStartE2EDuration="28.270884903s" podCreationTimestamp="2026-04-16 17:47:04 +0000 UTC" firstStartedPulling="2026-04-16 17:47:12.326834512 +0000 UTC m=+353.800063975" lastFinishedPulling="2026-04-16 17:47:31.562295149 +0000 UTC m=+373.035524612" observedRunningTime="2026-04-16 17:47:32.270395888 +0000 UTC m=+373.743625365" watchObservedRunningTime="2026-04-16 17:47:32.270884903 +0000 UTC m=+373.744114376"
Apr 16 17:47:53.257454 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:47:53.257420 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-nbxrg"
Apr 16 17:48:19.104216 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.104182 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-h4j5q"]
Apr 16 17:48:19.106377 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.106360 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:19.108357 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.108333 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-zcdkd\""
Apr 16 17:48:19.109106 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.109088 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 16 17:48:19.109262 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.109109 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 16 17:48:19.109262 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.109108 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\""
Apr 16 17:48:19.117900 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.117877 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-h4j5q"]
Apr 16 17:48:19.265953 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.265923 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zs8x\" (UniqueName: \"kubernetes.io/projected/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-kube-api-access-2zs8x\") pod \"kserve-controller-manager-7f8f4564d-h4j5q\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") " pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:19.266108 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.265972 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert\") pod \"kserve-controller-manager-7f8f4564d-h4j5q\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") " pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:19.367083 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.366997 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zs8x\" (UniqueName: \"kubernetes.io/projected/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-kube-api-access-2zs8x\") pod \"kserve-controller-manager-7f8f4564d-h4j5q\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") " pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:19.367083 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.367049 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert\") pod \"kserve-controller-manager-7f8f4564d-h4j5q\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") " pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:19.367292 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:48:19.367161 2571 secret.go:189] Couldn't get secret kserve/kserve-webhook-server-cert: secret "kserve-webhook-server-cert" not found
Apr 16 17:48:19.367292 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:48:19.367225 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert podName:d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1 nodeName:}" failed. No retries permitted until 2026-04-16 17:48:19.867207599 +0000 UTC m=+421.340437067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert") pod "kserve-controller-manager-7f8f4564d-h4j5q" (UID: "d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1") : secret "kserve-webhook-server-cert" not found
Apr 16 17:48:19.377175 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.377152 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zs8x\" (UniqueName: \"kubernetes.io/projected/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-kube-api-access-2zs8x\") pod \"kserve-controller-manager-7f8f4564d-h4j5q\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") " pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:19.870685 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.870639 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert\") pod \"kserve-controller-manager-7f8f4564d-h4j5q\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") " pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:19.873078 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:19.873054 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert\") pod \"kserve-controller-manager-7f8f4564d-h4j5q\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") " pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:20.016351 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:20.016319 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:20.134321 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:20.134297 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-h4j5q"]
Apr 16 17:48:20.136846 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:48:20.136814 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4f5e9c6_03ed_4cd4_8c21_5c5c4148f8e1.slice/crio-db4557d32049a43f3d56d5f051a71eed4bf5da1b0dc956c7409337ceaf3e337c WatchSource:0}: Error finding container db4557d32049a43f3d56d5f051a71eed4bf5da1b0dc956c7409337ceaf3e337c: Status 404 returned error can't find the container with id db4557d32049a43f3d56d5f051a71eed4bf5da1b0dc956c7409337ceaf3e337c
Apr 16 17:48:20.395328 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:20.395291 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q" event={"ID":"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1","Type":"ContainerStarted","Data":"db4557d32049a43f3d56d5f051a71eed4bf5da1b0dc956c7409337ceaf3e337c"}
Apr 16 17:48:23.407435 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:23.407394 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q" event={"ID":"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1","Type":"ContainerStarted","Data":"41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0"}
Apr 16 17:48:23.407435 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:23.407443 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:23.440131 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:23.440084 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q" podStartSLOduration=2.187536494 podStartE2EDuration="4.440068731s" podCreationTimestamp="2026-04-16 17:48:19 +0000 UTC" firstStartedPulling="2026-04-16 17:48:20.138049188 +0000 UTC m=+421.611278643" lastFinishedPulling="2026-04-16 17:48:22.390581426 +0000 UTC m=+423.863810880" observedRunningTime="2026-04-16 17:48:23.437727614 +0000 UTC m=+424.910957088" watchObservedRunningTime="2026-04-16 17:48:23.440068731 +0000 UTC m=+424.913298245"
Apr 16 17:48:54.416592 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:54.416516 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:55.705431 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:55.705397 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-h4j5q"]
Apr 16 17:48:55.705759 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:55.705620 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q" podUID="d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1" containerName="manager" containerID="cri-o://41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0" gracePeriod=10
Apr 16 17:48:55.937557 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:55.937537 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:56.044469 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.044434 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert\") pod \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") "
Apr 16 17:48:56.044469 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.044484 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zs8x\" (UniqueName: \"kubernetes.io/projected/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-kube-api-access-2zs8x\") pod \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\" (UID: \"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1\") "
Apr 16 17:48:56.046566 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.046534 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-kube-api-access-2zs8x" (OuterVolumeSpecName: "kube-api-access-2zs8x") pod "d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1" (UID: "d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1"). InnerVolumeSpecName "kube-api-access-2zs8x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 17:48:56.046649 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.046586 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert" (OuterVolumeSpecName: "cert") pod "d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1" (UID: "d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 17:48:56.145943 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.145919 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zs8x\" (UniqueName: \"kubernetes.io/projected/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-kube-api-access-2zs8x\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\""
Apr 16 17:48:56.145943 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.145940 2571 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1-cert\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\""
Apr 16 17:48:56.508979 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.508949 2571 generic.go:358] "Generic (PLEG): container finished" podID="d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1" containerID="41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0" exitCode=0
Apr 16 17:48:56.509104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.509013 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q"
Apr 16 17:48:56.509104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.509036 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q" event={"ID":"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1","Type":"ContainerDied","Data":"41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0"}
Apr 16 17:48:56.509104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.509078 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-h4j5q" event={"ID":"d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1","Type":"ContainerDied","Data":"db4557d32049a43f3d56d5f051a71eed4bf5da1b0dc956c7409337ceaf3e337c"}
Apr 16 17:48:56.509104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.509097 2571 scope.go:117] "RemoveContainer" containerID="41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0"
Apr 16 17:48:56.517988 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.517968 2571 scope.go:117] "RemoveContainer" containerID="41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0"
Apr 16 17:48:56.518278 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:48:56.518257 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0\": container with ID starting with 41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0 not found: ID does not exist" containerID="41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0"
Apr 16 17:48:56.518345 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.518287 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0"} err="failed to get container status \"41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0\": rpc error: code = NotFound desc = could not find container \"41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0\": container with ID starting with 41ad94bb2c9ddbf91a9020ddac272cbacfe9ac575e503c36c5173e0d3b9ff1e0 not found: ID does not exist"
Apr 16 17:48:56.530741 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.530714 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-h4j5q"]
Apr 16 17:48:56.536084 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:56.536062 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-h4j5q"]
Apr 16 17:48:57.034691 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:48:57.034661 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1" path="/var/lib/kubelet/pods/d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1/volumes"
Apr 16 17:49:31.283493 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.283460 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-bjw8k"]
Apr 16 17:49:31.283949 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.283784 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1" containerName="manager"
Apr 16 17:49:31.283949 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.283796 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1" containerName="manager"
Apr 16 17:49:31.283949 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.283882 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="d4f5e9c6-03ed-4cd4-8c21-5c5c4148f8e1" containerName="manager"
Apr 16 17:49:31.286072 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.286056 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:31.288110 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.288086 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 16 17:49:31.288244 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.288126 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\""
Apr 16 17:49:31.288976 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.288960 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-zctft\""
Apr 16 17:49:31.288976 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.288972 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 16 17:49:31.300168 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.300150 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-bjw8k"]
Apr 16 17:49:31.308483 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.308463 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-tlp6f"]
Apr 16 17:49:31.310795 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.310777 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:31.313042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.313027 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\""
Apr 16 17:49:31.313135 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.313054 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-dgj65\""
Apr 16 17:49:31.322720 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.322695 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-tlp6f"]
Apr 16 17:49:31.406082 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.406057 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/f2494067-51ef-4a80-a329-f02b5ba3e970-tls-certs\") pod \"model-serving-api-86f7b4b499-bjw8k\" (UID: \"f2494067-51ef-4a80-a329-f02b5ba3e970\") " pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:31.406174 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.406088 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwt9v\" (UniqueName: \"kubernetes.io/projected/99e5ce53-3010-4a08-a9af-f04c84f63ae5-kube-api-access-cwt9v\") pod \"odh-model-controller-696fc77849-tlp6f\" (UID: \"99e5ce53-3010-4a08-a9af-f04c84f63ae5\") " pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:31.406174 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.406110 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l484r\" (UniqueName: \"kubernetes.io/projected/f2494067-51ef-4a80-a329-f02b5ba3e970-kube-api-access-l484r\") pod \"model-serving-api-86f7b4b499-bjw8k\" (UID: \"f2494067-51ef-4a80-a329-f02b5ba3e970\") " pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:31.406258 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.406176 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99e5ce53-3010-4a08-a9af-f04c84f63ae5-cert\") pod \"odh-model-controller-696fc77849-tlp6f\" (UID: \"99e5ce53-3010-4a08-a9af-f04c84f63ae5\") " pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:31.507158 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.507136 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l484r\" (UniqueName: \"kubernetes.io/projected/f2494067-51ef-4a80-a329-f02b5ba3e970-kube-api-access-l484r\") pod \"model-serving-api-86f7b4b499-bjw8k\" (UID: \"f2494067-51ef-4a80-a329-f02b5ba3e970\") " pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:31.507248 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.507172 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99e5ce53-3010-4a08-a9af-f04c84f63ae5-cert\") pod \"odh-model-controller-696fc77849-tlp6f\" (UID: \"99e5ce53-3010-4a08-a9af-f04c84f63ae5\") " pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:31.507248 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.507229 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/f2494067-51ef-4a80-a329-f02b5ba3e970-tls-certs\") pod \"model-serving-api-86f7b4b499-bjw8k\" (UID: \"f2494067-51ef-4a80-a329-f02b5ba3e970\") " pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:31.507248 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.507247 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cwt9v\" (UniqueName: \"kubernetes.io/projected/99e5ce53-3010-4a08-a9af-f04c84f63ae5-kube-api-access-cwt9v\") pod \"odh-model-controller-696fc77849-tlp6f\" (UID: \"99e5ce53-3010-4a08-a9af-f04c84f63ae5\") " pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:31.507409 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:49:31.507387 2571 secret.go:189] Couldn't get secret kserve/model-serving-api-tls: secret "model-serving-api-tls" not found
Apr 16 17:49:31.507487 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:49:31.507476 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2494067-51ef-4a80-a329-f02b5ba3e970-tls-certs podName:f2494067-51ef-4a80-a329-f02b5ba3e970 nodeName:}" failed. No retries permitted until 2026-04-16 17:49:32.007454738 +0000 UTC m=+493.480684200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certs" (UniqueName: "kubernetes.io/secret/f2494067-51ef-4a80-a329-f02b5ba3e970-tls-certs") pod "model-serving-api-86f7b4b499-bjw8k" (UID: "f2494067-51ef-4a80-a329-f02b5ba3e970") : secret "model-serving-api-tls" not found
Apr 16 17:49:31.509677 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.509650 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99e5ce53-3010-4a08-a9af-f04c84f63ae5-cert\") pod \"odh-model-controller-696fc77849-tlp6f\" (UID: \"99e5ce53-3010-4a08-a9af-f04c84f63ae5\") " pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:31.519460 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.519438 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l484r\" (UniqueName: \"kubernetes.io/projected/f2494067-51ef-4a80-a329-f02b5ba3e970-kube-api-access-l484r\") pod \"model-serving-api-86f7b4b499-bjw8k\" (UID: \"f2494067-51ef-4a80-a329-f02b5ba3e970\") " pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:31.519580 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.519560 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwt9v\" (UniqueName: \"kubernetes.io/projected/99e5ce53-3010-4a08-a9af-f04c84f63ae5-kube-api-access-cwt9v\") pod \"odh-model-controller-696fc77849-tlp6f\" (UID: \"99e5ce53-3010-4a08-a9af-f04c84f63ae5\") " pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:31.622087 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.622014 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:31.746559 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:31.746473 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-tlp6f"]
Apr 16 17:49:31.748809 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:49:31.748780 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99e5ce53_3010_4a08_a9af_f04c84f63ae5.slice/crio-509f3719e6d3b0483188df7d5c6683cde2b597223dbb00b44acc9d12962a1ef8 WatchSource:0}: Error finding container 509f3719e6d3b0483188df7d5c6683cde2b597223dbb00b44acc9d12962a1ef8: Status 404 returned error can't find the container with id 509f3719e6d3b0483188df7d5c6683cde2b597223dbb00b44acc9d12962a1ef8
Apr 16 17:49:32.011711 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:32.011685 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/f2494067-51ef-4a80-a329-f02b5ba3e970-tls-certs\") pod \"model-serving-api-86f7b4b499-bjw8k\" (UID: \"f2494067-51ef-4a80-a329-f02b5ba3e970\") " pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:32.014046 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:32.014025 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/f2494067-51ef-4a80-a329-f02b5ba3e970-tls-certs\") pod \"model-serving-api-86f7b4b499-bjw8k\" (UID: \"f2494067-51ef-4a80-a329-f02b5ba3e970\") " pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:32.195909 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:32.195884 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:32.315940 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:32.315917 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-bjw8k"]
Apr 16 17:49:32.317813 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:49:32.317784 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2494067_51ef_4a80_a329_f02b5ba3e970.slice/crio-f88c691d5db508706eb05a2585ff3320dd26af07fe01bc134cf899eeafd42b4b WatchSource:0}: Error finding container f88c691d5db508706eb05a2585ff3320dd26af07fe01bc134cf899eeafd42b4b: Status 404 returned error can't find the container with id f88c691d5db508706eb05a2585ff3320dd26af07fe01bc134cf899eeafd42b4b
Apr 16 17:49:32.628187 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:32.628108 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-bjw8k" event={"ID":"f2494067-51ef-4a80-a329-f02b5ba3e970","Type":"ContainerStarted","Data":"f88c691d5db508706eb05a2585ff3320dd26af07fe01bc134cf899eeafd42b4b"}
Apr 16 17:49:32.629355 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:32.629314 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-tlp6f" event={"ID":"99e5ce53-3010-4a08-a9af-f04c84f63ae5","Type":"ContainerStarted","Data":"509f3719e6d3b0483188df7d5c6683cde2b597223dbb00b44acc9d12962a1ef8"}
Apr 16 17:49:35.641245 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.641211 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-bjw8k" event={"ID":"f2494067-51ef-4a80-a329-f02b5ba3e970","Type":"ContainerStarted","Data":"3dbd0d145418134fbb8b79e0a5517677235a12edd3b3350f131d4cc801e5f652"}
Apr 16 17:49:35.641632 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.641266 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-bjw8k"
Apr 16 17:49:35.642560 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.642537 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-tlp6f" event={"ID":"99e5ce53-3010-4a08-a9af-f04c84f63ae5","Type":"ContainerStarted","Data":"a9edd1e6d7c874f29f3c2f0d209aa34715dccc8699b924ba7780c51fc0c58f61"}
Apr 16 17:49:35.642661 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.642650 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:35.658988 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.658936 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-bjw8k" podStartSLOduration=2.20138218 podStartE2EDuration="4.658924192s" podCreationTimestamp="2026-04-16 17:49:31 +0000 UTC" firstStartedPulling="2026-04-16 17:49:32.319618945 +0000 UTC m=+493.792848400" lastFinishedPulling="2026-04-16 17:49:34.77716096 +0000 UTC m=+496.250390412" observedRunningTime="2026-04-16 17:49:35.65845147 +0000 UTC m=+497.131680944" watchObservedRunningTime="2026-04-16 17:49:35.658924192 +0000 UTC m=+497.132153664"
Apr 16 17:49:35.675901 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.675835 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-tlp6f" podStartSLOduration=1.645971115 podStartE2EDuration="4.675823128s" podCreationTimestamp="2026-04-16 17:49:31 +0000 UTC" firstStartedPulling="2026-04-16 17:49:31.750319793 +0000 UTC m=+493.223549247" lastFinishedPulling="2026-04-16 17:49:34.780171807 +0000 UTC m=+496.253401260" observedRunningTime="2026-04-16 17:49:35.67418045 +0000 UTC m=+497.147409949" watchObservedRunningTime="2026-04-16 17:49:35.675823128 +0000 UTC m=+497.149052601"
Apr 16 17:49:35.969982 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.969923 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-66b4746c66-lncl6"]
Apr 16 17:49:35.972122 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.972105 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:35.974363 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.974341 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Apr 16 17:49:35.974464 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.974401 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Apr 16 17:49:35.974464 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.974427 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-ghc5z\""
Apr 16 17:49:35.975114 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.975095 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Apr 16 17:49:35.975207 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.975114 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Apr 16 17:49:35.975400 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.975386 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Apr 16 17:49:35.980134 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.980112 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Apr 16 17:49:35.983754 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:35.983734 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66b4746c66-lncl6"]
Apr 16 17:49:36.146260 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.146228 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acbe6a76-08fe-460f-957f-725cfd01cee8-console-oauth-config\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.146388 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.146263 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-trusted-ca-bundle\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.146388 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.146346 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmx6\" (UniqueName: \"kubernetes.io/projected/acbe6a76-08fe-460f-957f-725cfd01cee8-kube-api-access-pxmx6\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.146481 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.146395 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-service-ca\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.146481 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.146423 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-oauth-serving-cert\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.146481 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.146453 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-console-config\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.146580 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.146483 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acbe6a76-08fe-460f-957f-725cfd01cee8-console-serving-cert\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.247795 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.247737 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmx6\" (UniqueName: \"kubernetes.io/projected/acbe6a76-08fe-460f-957f-725cfd01cee8-kube-api-access-pxmx6\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.247795 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.247768 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-service-ca\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.247795 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.247790 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-oauth-serving-cert\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.248031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.247817 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-console-config\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.248031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.247848 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acbe6a76-08fe-460f-957f-725cfd01cee8-console-serving-cert\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.248031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.247899 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acbe6a76-08fe-460f-957f-725cfd01cee8-console-oauth-config\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.248175 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.248071 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-trusted-ca-bundle\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.248544 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.248521 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-oauth-serving-cert\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.248632 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.248551 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-service-ca\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.248686 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.248662 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-console-config\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.248878 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.248843 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acbe6a76-08fe-460f-957f-725cfd01cee8-trusted-ca-bundle\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.250204 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.250184 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acbe6a76-08fe-460f-957f-725cfd01cee8-console-oauth-config\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.250443 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.250420 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acbe6a76-08fe-460f-957f-725cfd01cee8-console-serving-cert\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.256167 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.256147 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmx6\" (UniqueName: \"kubernetes.io/projected/acbe6a76-08fe-460f-957f-725cfd01cee8-kube-api-access-pxmx6\") pod \"console-66b4746c66-lncl6\" (UID: \"acbe6a76-08fe-460f-957f-725cfd01cee8\") " pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.282135 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.282117 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:36.410042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.410016 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66b4746c66-lncl6"]
Apr 16 17:49:36.412207 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:49:36.412171 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacbe6a76_08fe_460f_957f_725cfd01cee8.slice/crio-d895aab4cb5bc35afe37a8b44e6b877b2082302b3ef39c409aded18993544ca7 WatchSource:0}: Error finding container d895aab4cb5bc35afe37a8b44e6b877b2082302b3ef39c409aded18993544ca7: Status 404 returned error can't find the container with id d895aab4cb5bc35afe37a8b44e6b877b2082302b3ef39c409aded18993544ca7
Apr 16 17:49:36.647397 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.647363 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b4746c66-lncl6" event={"ID":"acbe6a76-08fe-460f-957f-725cfd01cee8","Type":"ContainerStarted","Data":"7bbfff4cce4de24093edf57ed98037ee667fbcde92ba633066d04bc321ceba2a"}
Apr 16 17:49:36.647784 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.647401 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b4746c66-lncl6" event={"ID":"acbe6a76-08fe-460f-957f-725cfd01cee8","Type":"ContainerStarted","Data":"d895aab4cb5bc35afe37a8b44e6b877b2082302b3ef39c409aded18993544ca7"}
Apr 16 17:49:36.670106 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:36.670063 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66b4746c66-lncl6" podStartSLOduration=1.670049404 podStartE2EDuration="1.670049404s" podCreationTimestamp="2026-04-16 17:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:49:36.669185028 +0000 UTC m=+498.142414505" watchObservedRunningTime="2026-04-16 17:49:36.670049404 +0000 UTC m=+498.143278877"
Apr 16 17:49:46.282829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:46.282793 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:46.283444 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:46.282845 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:46.287774 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:46.287746 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66b4746c66-lncl6"
Apr 16 17:49:46.650092 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:46.650063 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-tlp6f"
Apr 16 17:49:46.653201 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:46.653173 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-bjw8k" Apr 16 17:49:46.688016 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:49:46.687991 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66b4746c66-lncl6" Apr 16 17:50:08.127517 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.127445 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d"] Apr 16 17:50:08.185569 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.185546 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d"] Apr 16 17:50:08.185720 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.185663 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:50:08.187641 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.187622 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-kfjrd\"" Apr 16 17:50:08.274098 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.274070 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/57903f03-0043-4167-8658-957d7cc78f9f-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d\" (UID: \"57903f03-0043-4167-8658-957d7cc78f9f\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:50:08.375377 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.375350 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/57903f03-0043-4167-8658-957d7cc78f9f-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d\" (UID: \"57903f03-0043-4167-8658-957d7cc78f9f\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:50:08.375659 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.375641 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/57903f03-0043-4167-8658-957d7cc78f9f-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d\" (UID: \"57903f03-0043-4167-8658-957d7cc78f9f\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:50:08.496573 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.496552 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:50:08.622878 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.622838 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d"] Apr 16 17:50:08.624758 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:50:08.624731 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57903f03_0043_4167_8658_957d7cc78f9f.slice/crio-6e05777b3603bb45c57cd9c07ba3bdb8d8cde2661a00753f109a4631e334870a WatchSource:0}: Error finding container 6e05777b3603bb45c57cd9c07ba3bdb8d8cde2661a00753f109a4631e334870a: Status 404 returned error can't find the container with id 6e05777b3603bb45c57cd9c07ba3bdb8d8cde2661a00753f109a4631e334870a Apr 16 17:50:08.759334 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:08.759264 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" event={"ID":"57903f03-0043-4167-8658-957d7cc78f9f","Type":"ContainerStarted","Data":"6e05777b3603bb45c57cd9c07ba3bdb8d8cde2661a00753f109a4631e334870a"} Apr 16 17:50:11.771960 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:11.771908 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" event={"ID":"57903f03-0043-4167-8658-957d7cc78f9f","Type":"ContainerStarted","Data":"db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f"} Apr 16 17:50:15.787454 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:15.787375 2571 generic.go:358] "Generic (PLEG): container finished" podID="57903f03-0043-4167-8658-957d7cc78f9f" containerID="db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f" exitCode=0 Apr 16 17:50:15.787796 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:15.787448 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" event={"ID":"57903f03-0043-4167-8658-957d7cc78f9f","Type":"ContainerDied","Data":"db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f"} Apr 16 17:50:29.850997 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:29.850963 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" event={"ID":"57903f03-0043-4167-8658-957d7cc78f9f","Type":"ContainerStarted","Data":"3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15"} Apr 16 17:50:32.863088 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:32.863053 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" event={"ID":"57903f03-0043-4167-8658-957d7cc78f9f","Type":"ContainerStarted","Data":"c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00"} Apr 16 17:50:32.863497 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:32.863250 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:50:32.863497 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:32.863282 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:50:32.864751 ip-10-0-138-134 
kubenswrapper[2571]: I0416 17:50:32.864718 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:50:32.865439 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:32.865417 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:32.879733 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:32.879681 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podStartSLOduration=1.506501427 podStartE2EDuration="24.87967032s" podCreationTimestamp="2026-04-16 17:50:08 +0000 UTC" firstStartedPulling="2026-04-16 17:50:08.626802149 +0000 UTC m=+530.100031613" lastFinishedPulling="2026-04-16 17:50:31.99997104 +0000 UTC m=+553.473200506" observedRunningTime="2026-04-16 17:50:32.877787889 +0000 UTC m=+554.351017361" watchObservedRunningTime="2026-04-16 17:50:32.87967032 +0000 UTC m=+554.352899794" Apr 16 17:50:33.867881 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:33.867826 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:50:33.868275 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:33.868126 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:43.868205 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:43.868159 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:50:43.868602 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:43.868576 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:53.868488 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:53.868442 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:50:53.868908 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:50:53.868836 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" 
containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:03.868450 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:03.868404 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:51:03.868947 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:03.868896 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:13.867815 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:13.867763 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:51:13.868288 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:13.868258 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:23.868291 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:23.868245 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:51:23.868776 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:23.868729 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:33.868669 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:33.868594 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:51:33.869032 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:33.868802 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:51:43.196124 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.196092 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d"] Apr 16 17:51:43.198673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.196471 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" containerID="cri-o://3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15" gracePeriod=30 Apr 16 17:51:43.198673 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.196554 2571 
kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" containerID="cri-o://c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00" gracePeriod=30 Apr 16 17:51:43.293370 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.293339 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl"] Apr 16 17:51:43.297103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.297085 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:51:43.306974 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.306950 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl"] Apr 16 17:51:43.351436 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.351414 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2"] Apr 16 17:51:43.354970 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.354938 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:51:43.366612 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.366590 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2"] Apr 16 17:51:43.389205 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.389180 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f437258-b4d8-4dc4-9dd5-2a68d4b0af64-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2\" (UID: \"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:51:43.389295 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.389218 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a74de01c-8be7-4cce-a0af-db2cc3990065-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl\" (UID: \"a74de01c-8be7-4cce-a0af-db2cc3990065\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:51:43.490199 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.490145 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a74de01c-8be7-4cce-a0af-db2cc3990065-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl\" (UID: \"a74de01c-8be7-4cce-a0af-db2cc3990065\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:51:43.490278 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.490238 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f437258-b4d8-4dc4-9dd5-2a68d4b0af64-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2\" 
(UID: \"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:51:43.490609 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.490591 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f437258-b4d8-4dc4-9dd5-2a68d4b0af64-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2\" (UID: \"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:51:43.490656 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.490629 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a74de01c-8be7-4cce-a0af-db2cc3990065-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl\" (UID: \"a74de01c-8be7-4cce-a0af-db2cc3990065\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:51:43.609004 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.608979 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:51:43.665342 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.664901 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:51:43.734835 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.734810 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl"] Apr 16 17:51:43.737212 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:51:43.737186 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda74de01c_8be7_4cce_a0af_db2cc3990065.slice/crio-0337b4325653394a98a2748609d8440a8bf41146073f1e8a0ac0776750a8d483 WatchSource:0}: Error finding container 0337b4325653394a98a2748609d8440a8bf41146073f1e8a0ac0776750a8d483: Status 404 returned error can't find the container with id 0337b4325653394a98a2748609d8440a8bf41146073f1e8a0ac0776750a8d483 Apr 16 17:51:43.739112 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.739094 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 17:51:43.793951 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.793525 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2"] Apr 16 17:51:43.798900 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:51:43.798871 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f437258_b4d8_4dc4_9dd5_2a68d4b0af64.slice/crio-528198f2d74523de71d098017b656d19cf0cb46bd5d41c1ce5348fbfdda7faf8 WatchSource:0}: Error finding container 528198f2d74523de71d098017b656d19cf0cb46bd5d41c1ce5348fbfdda7faf8: Status 404 returned error can't find the container with id 528198f2d74523de71d098017b656d19cf0cb46bd5d41c1ce5348fbfdda7faf8 Apr 16 17:51:43.868720 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.868667 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" 
podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:51:43.869014 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:43.868987 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:44.107783 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:44.107695 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" event={"ID":"a74de01c-8be7-4cce-a0af-db2cc3990065","Type":"ContainerStarted","Data":"7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57"} Apr 16 17:51:44.107783 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:44.107741 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" event={"ID":"a74de01c-8be7-4cce-a0af-db2cc3990065","Type":"ContainerStarted","Data":"0337b4325653394a98a2748609d8440a8bf41146073f1e8a0ac0776750a8d483"} Apr 16 17:51:44.109163 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:44.109110 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" event={"ID":"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64","Type":"ContainerStarted","Data":"eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a"} Apr 16 17:51:44.109272 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:44.109168 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" event={"ID":"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64","Type":"ContainerStarted","Data":"528198f2d74523de71d098017b656d19cf0cb46bd5d41c1ce5348fbfdda7faf8"} Apr 16 17:51:47.121421 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:47.121347 2571 generic.go:358] "Generic (PLEG): container finished" podID="57903f03-0043-4167-8658-957d7cc78f9f" containerID="3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15" exitCode=0 Apr 16 17:51:47.121740 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:47.121425 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" event={"ID":"57903f03-0043-4167-8658-957d7cc78f9f","Type":"ContainerDied","Data":"3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15"} Apr 16 17:51:48.127458 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:48.127035 2571 generic.go:358] "Generic (PLEG): container finished" podID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerID="7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57" exitCode=0 Apr 16 17:51:48.127458 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:48.127171 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" event={"ID":"a74de01c-8be7-4cce-a0af-db2cc3990065","Type":"ContainerDied","Data":"7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57"} Apr 16 17:51:48.129610 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:48.129573 2571 generic.go:358] "Generic (PLEG): container finished" podID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerID="eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a" 
exitCode=0 Apr 16 17:51:48.129841 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:48.129823 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" event={"ID":"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64","Type":"ContainerDied","Data":"eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a"} Apr 16 17:51:49.136815 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:49.136780 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" event={"ID":"a74de01c-8be7-4cce-a0af-db2cc3990065","Type":"ContainerStarted","Data":"47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb"} Apr 16 17:51:49.137304 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:49.137178 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:51:49.138850 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:49.138819 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:51:49.156761 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:49.156711 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podStartSLOduration=6.156694675 podStartE2EDuration="6.156694675s" podCreationTimestamp="2026-04-16 17:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:51:49.153051034 +0000 UTC m=+630.626280509" watchObservedRunningTime="2026-04-16 17:51:49.156694675 +0000 UTC m=+630.629924149" Apr 16 17:51:50.142848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:50.142807 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:51:53.867956 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:53.867903 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:51:53.868403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:51:53.868302 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:52:00.143803 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:00.143754 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:52:03.868560 ip-10-0-138-134 
kubenswrapper[2571]: I0416 17:52:03.868507 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: connect: connection refused" Apr 16 17:52:03.869042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:03.868662 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:52:03.869042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:03.868879 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:52:03.869042 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:03.868996 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:52:07.209681 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:07.209650 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" event={"ID":"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64","Type":"ContainerStarted","Data":"5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92"} Apr 16 17:52:07.210048 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:07.209905 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:52:07.211128 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:07.211103 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:52:07.226622 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:07.226583 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podStartSLOduration=6.02902959 podStartE2EDuration="24.226571334s" podCreationTimestamp="2026-04-16 17:51:43 +0000 UTC" firstStartedPulling="2026-04-16 17:51:48.132101269 +0000 UTC m=+629.605330721" lastFinishedPulling="2026-04-16 17:52:06.329642997 +0000 UTC m=+647.802872465" observedRunningTime="2026-04-16 17:52:07.224325339 +0000 UTC m=+648.697554811" watchObservedRunningTime="2026-04-16 17:52:07.226571334 +0000 UTC m=+648.699800875" Apr 16 17:52:08.213594 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:08.213557 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:52:10.142836 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:10.142798 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" 
probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:52:13.888583 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:13.888561 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:52:14.044105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.044081 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/57903f03-0043-4167-8658-957d7cc78f9f-kserve-provision-location\") pod \"57903f03-0043-4167-8658-957d7cc78f9f\" (UID: \"57903f03-0043-4167-8658-957d7cc78f9f\") " Apr 16 17:52:14.044400 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.044374 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57903f03-0043-4167-8658-957d7cc78f9f-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "57903f03-0043-4167-8658-957d7cc78f9f" (UID: "57903f03-0043-4167-8658-957d7cc78f9f"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:52:14.144814 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.144788 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/57903f03-0043-4167-8658-957d7cc78f9f-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:52:14.233717 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.233689 2571 generic.go:358] "Generic (PLEG): container finished" podID="57903f03-0043-4167-8658-957d7cc78f9f" containerID="c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00" exitCode=0 Apr 16 17:52:14.233844 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.233831 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" Apr 16 17:52:14.233905 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.233825 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" event={"ID":"57903f03-0043-4167-8658-957d7cc78f9f","Type":"ContainerDied","Data":"c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00"} Apr 16 17:52:14.233905 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.233894 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" event={"ID":"57903f03-0043-4167-8658-957d7cc78f9f","Type":"ContainerDied","Data":"6e05777b3603bb45c57cd9c07ba3bdb8d8cde2661a00753f109a4631e334870a"} Apr 16 17:52:14.233975 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.233910 2571 scope.go:117] "RemoveContainer" containerID="c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00" Apr 16 17:52:14.242758 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.242742 2571 scope.go:117] "RemoveContainer" containerID="3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15" Apr 16 17:52:14.249549 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.249534 2571 scope.go:117] "RemoveContainer" containerID="db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f" Apr 16 17:52:14.255744 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.255719 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d"] Apr 16 17:52:14.256928 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.256911 2571 scope.go:117] "RemoveContainer" containerID="c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00" Apr 16 17:52:14.257181 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:52:14.257156 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00\": container with ID starting with c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00 not found: ID does not exist" containerID="c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00" Apr 16 17:52:14.257267 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.257191 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00"} err="failed to get container status \"c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00\": rpc error: code = NotFound desc = could not find container \"c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00\": container with ID starting with c92a631aa8ed5077c87998729f0b5ad3755fe979beb96650ff1932c4c0c87d00 not found: ID does not exist" Apr 16 17:52:14.257267 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.257215 2571 scope.go:117] "RemoveContainer" containerID="3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15" Apr 16 17:52:14.257680 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:52:14.257650 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15\": container with ID starting with 3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15 not found: ID does not 
exist" containerID="3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15" Apr 16 17:52:14.257783 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.257691 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15"} err="failed to get container status \"3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15\": rpc error: code = NotFound desc = could not find container \"3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15\": container with ID starting with 3eb08c6a44c8bf110185f86bc33f1fe5b60d287c485c28a64d0499df607c9e15 not found: ID does not exist" Apr 16 17:52:14.257783 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.257732 2571 scope.go:117] "RemoveContainer" containerID="db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f" Apr 16 17:52:14.258148 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:52:14.258120 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f\": container with ID starting with db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f not found: ID does not exist" containerID="db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f" Apr 16 17:52:14.258226 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.258151 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f"} err="failed to get container status \"db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f\": rpc error: code = NotFound desc = could not find container \"db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f\": container with ID starting with db2781d4f7ac17c2bf70a680f5d905c28ef1c9acfac0378ce4eed197ad017d9f not found: ID does not exist" Apr 16 17:52:14.259487 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.259469 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d"] Apr 16 17:52:14.869353 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.869315 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" probeResult="failure" output="Get \"http://10.133.0.32:9081/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 16 17:52:14.869353 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:14.869339 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-d6cb7-predictor-5f49984f7f-tm72d" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.32:8080: i/o timeout" Apr 16 17:52:15.035510 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:15.035473 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57903f03-0043-4167-8658-957d7cc78f9f" path="/var/lib/kubelet/pods/57903f03-0043-4167-8658-957d7cc78f9f/volumes" Apr 16 17:52:18.214485 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:18.214444 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:52:20.143309 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:20.143269 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:52:28.213912 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:28.213847 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:52:30.143276 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:30.143235 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:52:38.214509 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:38.214469 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:52:40.142762 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:40.142723 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:52:48.213920 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:48.213879 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:52:50.143763 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:50.143725 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:52:58.214574 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:52:58.214530 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:53:00.144451 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:00.144419 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:53:08.215232 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:08.215134 2571 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:53:13.317505 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.317470 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq"] Apr 16 17:53:13.318046 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.318029 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" Apr 16 17:53:13.318132 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.318048 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" Apr 16 17:53:13.318132 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.318064 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="storage-initializer" Apr 16 17:53:13.318132 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.318071 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="storage-initializer" Apr 16 17:53:13.318132 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.318083 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" Apr 16 17:53:13.318132 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.318092 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" Apr 16 17:53:13.318364 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.318187 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="kserve-container" Apr 16 17:53:13.318364 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.318201 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="57903f03-0043-4167-8658-957d7cc78f9f" containerName="agent" Apr 16 17:53:13.320217 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.320198 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:13.322385 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.322358 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-6c064-serving-cert\"" Apr 16 17:53:13.322482 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.322393 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 17:53:13.322482 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.322358 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-6c064-kube-rbac-proxy-sar-config\"" Apr 16 17:53:13.328498 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.328470 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq"] Apr 16 17:53:13.389043 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.389015 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fce0b23-574e-44ac-85af-4977d4a860e8-openshift-service-ca-bundle\") pod \"model-chainer-raw-6c064-5c4f4845b6-8wrbq\" (UID: \"5fce0b23-574e-44ac-85af-4977d4a860e8\") " pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:13.389186 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.389094 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fce0b23-574e-44ac-85af-4977d4a860e8-proxy-tls\") pod \"model-chainer-raw-6c064-5c4f4845b6-8wrbq\" (UID: \"5fce0b23-574e-44ac-85af-4977d4a860e8\") " pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:13.489585 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.489554 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fce0b23-574e-44ac-85af-4977d4a860e8-proxy-tls\") pod \"model-chainer-raw-6c064-5c4f4845b6-8wrbq\" (UID: \"5fce0b23-574e-44ac-85af-4977d4a860e8\") " pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:13.489713 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.489619 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fce0b23-574e-44ac-85af-4977d4a860e8-openshift-service-ca-bundle\") pod \"model-chainer-raw-6c064-5c4f4845b6-8wrbq\" (UID: \"5fce0b23-574e-44ac-85af-4977d4a860e8\") " pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:13.490224 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.490195 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fce0b23-574e-44ac-85af-4977d4a860e8-openshift-service-ca-bundle\") pod \"model-chainer-raw-6c064-5c4f4845b6-8wrbq\" (UID: \"5fce0b23-574e-44ac-85af-4977d4a860e8\") " pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:13.492107 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.492086 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fce0b23-574e-44ac-85af-4977d4a860e8-proxy-tls\") pod 
\"model-chainer-raw-6c064-5c4f4845b6-8wrbq\" (UID: \"5fce0b23-574e-44ac-85af-4977d4a860e8\") " pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:13.631367 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.631311 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:13.751761 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:13.751739 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq"] Apr 16 17:53:13.754564 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:53:13.754535 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fce0b23_574e_44ac_85af_4977d4a860e8.slice/crio-898c3d5c6d6483ecc054a982e2d8239af046a9090dfd13440af8d75366d7a8d5 WatchSource:0}: Error finding container 898c3d5c6d6483ecc054a982e2d8239af046a9090dfd13440af8d75366d7a8d5: Status 404 returned error can't find the container with id 898c3d5c6d6483ecc054a982e2d8239af046a9090dfd13440af8d75366d7a8d5 Apr 16 17:53:14.438748 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:14.438711 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" event={"ID":"5fce0b23-574e-44ac-85af-4977d4a860e8","Type":"ContainerStarted","Data":"898c3d5c6d6483ecc054a982e2d8239af046a9090dfd13440af8d75366d7a8d5"} Apr 16 17:53:16.450321 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:16.450285 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" event={"ID":"5fce0b23-574e-44ac-85af-4977d4a860e8","Type":"ContainerStarted","Data":"4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611"} Apr 16 17:53:16.450764 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:16.450403 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:16.468105 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:16.468054 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" podStartSLOduration=1.524137369 podStartE2EDuration="3.468039766s" podCreationTimestamp="2026-04-16 17:53:13 +0000 UTC" firstStartedPulling="2026-04-16 17:53:13.756386457 +0000 UTC m=+715.229615908" lastFinishedPulling="2026-04-16 17:53:15.70028885 +0000 UTC m=+717.173518305" observedRunningTime="2026-04-16 17:53:16.466534433 +0000 UTC m=+717.939763940" watchObservedRunningTime="2026-04-16 17:53:16.468039766 +0000 UTC m=+717.941269239" Apr 16 17:53:22.460195 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:22.460170 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:23.360335 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.360294 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq"] Apr 16 17:53:23.360610 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.360582 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" 
containerID="cri-o://4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611" gracePeriod=30 Apr 16 17:53:23.485534 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.485503 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl"] Apr 16 17:53:23.485881 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.485746 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" containerID="cri-o://47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb" gracePeriod=30 Apr 16 17:53:23.546339 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.546312 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6"] Apr 16 17:53:23.550077 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.550059 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:53:23.561128 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.561108 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6"] Apr 16 17:53:23.608204 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.608175 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8"] Apr 16 17:53:23.611825 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.611770 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:53:23.620908 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.620887 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8"] Apr 16 17:53:23.629535 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.629508 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2"] Apr 16 17:53:23.629773 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.629753 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" containerID="cri-o://5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92" gracePeriod=30 Apr 16 17:53:23.665827 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.665804 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c4607e26-c460-4a4d-a12c-153a1cd3c7ad-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6\" (UID: \"c4607e26-c460-4a4d-a12c-153a1cd3c7ad\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:53:23.766885 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.766848 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: 
\"kubernetes.io/empty-dir/c4607e26-c460-4a4d-a12c-153a1cd3c7ad-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6\" (UID: \"c4607e26-c460-4a4d-a12c-153a1cd3c7ad\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:53:23.767031 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.766917 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/58f1a8e0-7d1f-43cb-80a9-c619abf538a5-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8\" (UID: \"58f1a8e0-7d1f-43cb-80a9-c619abf538a5\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:53:23.767256 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.767233 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c4607e26-c460-4a4d-a12c-153a1cd3c7ad-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6\" (UID: \"c4607e26-c460-4a4d-a12c-153a1cd3c7ad\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:53:23.860370 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.860344 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:53:23.868278 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.868215 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/58f1a8e0-7d1f-43cb-80a9-c619abf538a5-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8\" (UID: \"58f1a8e0-7d1f-43cb-80a9-c619abf538a5\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:53:23.868571 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.868551 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/58f1a8e0-7d1f-43cb-80a9-c619abf538a5-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8\" (UID: \"58f1a8e0-7d1f-43cb-80a9-c619abf538a5\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:53:23.922955 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.922812 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:53:23.987055 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:23.986083 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6"] Apr 16 17:53:23.989624 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:53:23.989595 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4607e26_c460_4a4d_a12c_153a1cd3c7ad.slice/crio-cc8ad0f4bdf5920797add576a59353c6ecb95c23e56ed5c5912c028129f0b802 WatchSource:0}: Error finding container cc8ad0f4bdf5920797add576a59353c6ecb95c23e56ed5c5912c028129f0b802: Status 404 returned error can't find the container with id cc8ad0f4bdf5920797add576a59353c6ecb95c23e56ed5c5912c028129f0b802 Apr 16 17:53:24.053727 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:24.053712 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8"] Apr 16 17:53:24.055696 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:53:24.055673 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f1a8e0_7d1f_43cb_80a9_c619abf538a5.slice/crio-90a60e28f857bae2ac8be9496b44250d8533e06aa3fa9a53f619c991e8abd63c WatchSource:0}: Error finding container 90a60e28f857bae2ac8be9496b44250d8533e06aa3fa9a53f619c991e8abd63c: Status 404 returned error can't find the container with id 90a60e28f857bae2ac8be9496b44250d8533e06aa3fa9a53f619c991e8abd63c Apr 16 17:53:24.481151 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:24.481082 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" event={"ID":"c4607e26-c460-4a4d-a12c-153a1cd3c7ad","Type":"ContainerStarted","Data":"05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e"} Apr 16 17:53:24.481151 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:24.481126 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" event={"ID":"c4607e26-c460-4a4d-a12c-153a1cd3c7ad","Type":"ContainerStarted","Data":"cc8ad0f4bdf5920797add576a59353c6ecb95c23e56ed5c5912c028129f0b802"} Apr 16 17:53:24.482492 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:24.482463 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" event={"ID":"58f1a8e0-7d1f-43cb-80a9-c619abf538a5","Type":"ContainerStarted","Data":"33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217"} Apr 16 17:53:24.482598 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:24.482498 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" event={"ID":"58f1a8e0-7d1f-43cb-80a9-c619abf538a5","Type":"ContainerStarted","Data":"90a60e28f857bae2ac8be9496b44250d8533e06aa3fa9a53f619c991e8abd63c"} Apr 16 17:53:26.870416 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:26.870396 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:53:26.992342 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:26.992270 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f437258-b4d8-4dc4-9dd5-2a68d4b0af64-kserve-provision-location\") pod \"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64\" (UID: \"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64\") " Apr 16 17:53:26.992667 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:26.992639 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f437258-b4d8-4dc4-9dd5-2a68d4b0af64-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" (UID: "1f437258-b4d8-4dc4-9dd5-2a68d4b0af64"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:53:27.093596 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.093577 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f437258-b4d8-4dc4-9dd5-2a68d4b0af64-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:53:27.421842 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.421821 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:53:27.458350 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.458313 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:53:27.494253 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.494222 2571 generic.go:358] "Generic (PLEG): container finished" podID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerID="47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb" exitCode=0 Apr 16 17:53:27.494408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.494299 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" Apr 16 17:53:27.494408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.494313 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" event={"ID":"a74de01c-8be7-4cce-a0af-db2cc3990065","Type":"ContainerDied","Data":"47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb"} Apr 16 17:53:27.494408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.494360 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl" event={"ID":"a74de01c-8be7-4cce-a0af-db2cc3990065","Type":"ContainerDied","Data":"0337b4325653394a98a2748609d8440a8bf41146073f1e8a0ac0776750a8d483"} Apr 16 17:53:27.494408 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.494387 2571 scope.go:117] "RemoveContainer" containerID="47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb" Apr 16 17:53:27.495836 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.495811 2571 generic.go:358] "Generic (PLEG): container finished" podID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerID="5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92" exitCode=0 Apr 16 17:53:27.495975 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.495887 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" Apr 16 17:53:27.495975 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.495896 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" event={"ID":"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64","Type":"ContainerDied","Data":"5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92"} Apr 16 17:53:27.495975 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.495923 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2" event={"ID":"1f437258-b4d8-4dc4-9dd5-2a68d4b0af64","Type":"ContainerDied","Data":"528198f2d74523de71d098017b656d19cf0cb46bd5d41c1ce5348fbfdda7faf8"} Apr 16 17:53:27.496223 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.496185 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a74de01c-8be7-4cce-a0af-db2cc3990065-kserve-provision-location\") pod \"a74de01c-8be7-4cce-a0af-db2cc3990065\" (UID: \"a74de01c-8be7-4cce-a0af-db2cc3990065\") " Apr 16 17:53:27.496527 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.496501 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a74de01c-8be7-4cce-a0af-db2cc3990065-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "a74de01c-8be7-4cce-a0af-db2cc3990065" (UID: "a74de01c-8be7-4cce-a0af-db2cc3990065"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:53:27.502492 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.502475 2571 scope.go:117] "RemoveContainer" containerID="7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57" Apr 16 17:53:27.509461 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.509441 2571 scope.go:117] "RemoveContainer" containerID="47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb" Apr 16 17:53:27.509705 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:53:27.509687 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb\": container with ID starting with 47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb not found: ID does not exist" containerID="47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb" Apr 16 17:53:27.509781 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.509715 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb"} err="failed to get container status \"47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb\": rpc error: code = NotFound desc = could not find container \"47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb\": container with ID starting with 47aafabbc0c33728a4c032c036e2649742bf9ac866e3bee58b5def68bc34c3eb not found: ID does not exist" Apr 16 17:53:27.509781 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.509740 2571 scope.go:117] "RemoveContainer" containerID="7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57" Apr 16 17:53:27.510032 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:53:27.510010 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57\": container with ID starting with 7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57 not found: ID does not exist" containerID="7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57" Apr 16 17:53:27.510082 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.510038 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57"} err="failed to get container status \"7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57\": rpc error: code = NotFound desc = could not find container \"7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57\": container with ID starting with 7e5ca1a9683b557ff24bd019544c2c53efb4259d8f902d9bcf4b208f4c067c57 not found: ID does not exist" Apr 16 17:53:27.510082 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.510053 2571 scope.go:117] "RemoveContainer" containerID="5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92" Apr 16 17:53:27.515136 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.515118 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2"] Apr 16 17:53:27.519715 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.519323 2571 scope.go:117] "RemoveContainer" containerID="eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a" Apr 16 17:53:27.520725 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.520699 2571 
kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-6c064-predictor-69d59c9c79-z8fh2"] Apr 16 17:53:27.528250 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.528230 2571 scope.go:117] "RemoveContainer" containerID="5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92" Apr 16 17:53:27.528492 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:53:27.528472 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92\": container with ID starting with 5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92 not found: ID does not exist" containerID="5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92" Apr 16 17:53:27.528555 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.528499 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92"} err="failed to get container status \"5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92\": rpc error: code = NotFound desc = could not find container \"5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92\": container with ID starting with 5e9328eef40176b208f9eb90913eb3a05a019645bffec292f2c4b3e2543b2d92 not found: ID does not exist" Apr 16 17:53:27.528555 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.528522 2571 scope.go:117] "RemoveContainer" containerID="eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a" Apr 16 17:53:27.528753 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:53:27.528734 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a\": container with ID starting with eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a not found: ID does not exist" containerID="eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a" Apr 16 17:53:27.528801 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.528760 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a"} err="failed to get container status \"eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a\": rpc error: code = NotFound desc = could not find container \"eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a\": container with ID starting with eebd3e8714a8aae39608c4103e4133c0539b7a0fbfd562a36a2e5d87b772c31a not found: ID does not exist" Apr 16 17:53:27.597286 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.597229 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a74de01c-8be7-4cce-a0af-db2cc3990065-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:53:27.817184 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.817141 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl"] Apr 16 17:53:27.818790 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:27.818764 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-6c064-predictor-78b76f88f7-dzvrl"] Apr 16 17:53:28.502489 ip-10-0-138-134 kubenswrapper[2571]: 
I0416 17:53:28.502453 2571 generic.go:358] "Generic (PLEG): container finished" podID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerID="05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e" exitCode=0 Apr 16 17:53:28.502965 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:28.502527 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" event={"ID":"c4607e26-c460-4a4d-a12c-153a1cd3c7ad","Type":"ContainerDied","Data":"05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e"} Apr 16 17:53:28.504134 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:28.504112 2571 generic.go:358] "Generic (PLEG): container finished" podID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerID="33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217" exitCode=0 Apr 16 17:53:28.504246 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:28.504207 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" event={"ID":"58f1a8e0-7d1f-43cb-80a9-c619abf538a5","Type":"ContainerDied","Data":"33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217"} Apr 16 17:53:29.035574 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.035545 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" path="/var/lib/kubelet/pods/1f437258-b4d8-4dc4-9dd5-2a68d4b0af64/volumes" Apr 16 17:53:29.035923 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.035910 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" path="/var/lib/kubelet/pods/a74de01c-8be7-4cce-a0af-db2cc3990065/volumes" Apr 16 17:53:29.510763 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.510730 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" event={"ID":"c4607e26-c460-4a4d-a12c-153a1cd3c7ad","Type":"ContainerStarted","Data":"4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b"} Apr 16 17:53:29.511196 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.511014 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:53:29.512393 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.512367 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 16 17:53:29.512509 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.512458 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" event={"ID":"58f1a8e0-7d1f-43cb-80a9-c619abf538a5","Type":"ContainerStarted","Data":"b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2"} Apr 16 17:53:29.512739 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.512724 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:53:29.513753 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.513706 2571 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:53:29.531510 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.531472 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podStartSLOduration=6.531463016 podStartE2EDuration="6.531463016s" podCreationTimestamp="2026-04-16 17:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:53:29.529551021 +0000 UTC m=+731.002780494" watchObservedRunningTime="2026-04-16 17:53:29.531463016 +0000 UTC m=+731.004692485" Apr 16 17:53:29.546114 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:29.546078 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podStartSLOduration=6.54606929 podStartE2EDuration="6.54606929s" podCreationTimestamp="2026-04-16 17:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:53:29.544581771 +0000 UTC m=+731.017811245" watchObservedRunningTime="2026-04-16 17:53:29.54606929 +0000 UTC m=+731.019298764" Apr 16 17:53:30.516993 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:30.516935 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 16 17:53:30.517416 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:30.517095 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:53:32.458006 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:32.457966 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:53:37.458597 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:37.458559 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:53:37.459095 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:37.458678 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:40.517409 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:40.517368 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" probeResult="failure" 
output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:53:40.517757 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:40.517369 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 16 17:53:42.458440 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:42.458397 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:53:47.459025 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:47.458988 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:53:50.517518 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:50.517481 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:53:50.517843 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:50.517481 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 16 17:53:52.458383 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:52.458349 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:53:53.498322 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.498297 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:53.597684 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.597651 2571 generic.go:358] "Generic (PLEG): container finished" podID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerID="4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611" exitCode=0 Apr 16 17:53:53.597881 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.597724 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" Apr 16 17:53:53.597881 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.597736 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" event={"ID":"5fce0b23-574e-44ac-85af-4977d4a860e8","Type":"ContainerDied","Data":"4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611"} Apr 16 17:53:53.597881 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.597774 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq" event={"ID":"5fce0b23-574e-44ac-85af-4977d4a860e8","Type":"ContainerDied","Data":"898c3d5c6d6483ecc054a982e2d8239af046a9090dfd13440af8d75366d7a8d5"} Apr 16 17:53:53.597881 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.597793 2571 scope.go:117] "RemoveContainer" containerID="4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611" Apr 16 17:53:53.598947 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.598928 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fce0b23-574e-44ac-85af-4977d4a860e8-proxy-tls\") pod \"5fce0b23-574e-44ac-85af-4977d4a860e8\" (UID: \"5fce0b23-574e-44ac-85af-4977d4a860e8\") " Apr 16 17:53:53.599102 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.599083 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fce0b23-574e-44ac-85af-4977d4a860e8-openshift-service-ca-bundle\") pod \"5fce0b23-574e-44ac-85af-4977d4a860e8\" (UID: \"5fce0b23-574e-44ac-85af-4977d4a860e8\") " Apr 16 17:53:53.599562 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.599535 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fce0b23-574e-44ac-85af-4977d4a860e8-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "5fce0b23-574e-44ac-85af-4977d4a860e8" (UID: "5fce0b23-574e-44ac-85af-4977d4a860e8"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:53:53.601178 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.601154 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fce0b23-574e-44ac-85af-4977d4a860e8-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "5fce0b23-574e-44ac-85af-4977d4a860e8" (UID: "5fce0b23-574e-44ac-85af-4977d4a860e8"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:53:53.610883 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.610867 2571 scope.go:117] "RemoveContainer" containerID="4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611" Apr 16 17:53:53.611130 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:53:53.611113 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611\": container with ID starting with 4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611 not found: ID does not exist" containerID="4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611" Apr 16 17:53:53.611187 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.611136 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611"} err="failed to get container status \"4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611\": rpc error: code = NotFound desc = could not find container \"4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611\": container with ID starting with 4d7c53474c1b1f0569e50291792965b93684bcb8af044b639dd2a937f5b77611 not found: ID does not exist" Apr 16 17:53:53.700061 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.700031 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fce0b23-574e-44ac-85af-4977d4a860e8-proxy-tls\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:53:53.700061 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.700052 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fce0b23-574e-44ac-85af-4977d4a860e8-openshift-service-ca-bundle\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:53:53.919906 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.919883 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq"] Apr 16 17:53:53.924981 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:53.924959 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-6c064-5c4f4845b6-8wrbq"] Apr 16 17:53:55.035544 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:53:55.035514 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" path="/var/lib/kubelet/pods/5fce0b23-574e-44ac-85af-4977d4a860e8/volumes" Apr 16 17:54:00.517230 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:00.517188 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:54:00.517642 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:00.517189 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 16 17:54:10.517578 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:10.517534 2571 prober.go:120] "Probe failed" 
probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:54:10.517971 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:10.517533 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 16 17:54:20.517191 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:20.517147 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 16 17:54:20.517627 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:20.517147 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:54:30.518220 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:30.518184 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:54:30.518801 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:30.518618 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:54:53.696470 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.696435 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8"] Apr 16 17:54:53.696904 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.696707 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" containerID="cri-o://b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2" gracePeriod=30 Apr 16 17:54:53.752824 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.752794 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6"] Apr 16 17:54:53.753176 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.753149 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container" containerID="cri-o://4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b" gracePeriod=30 Apr 16 17:54:53.777056 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777032 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh"] Apr 16 17:54:53.777397 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777385 2571 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" Apr 16 17:54:53.777447 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777399 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" Apr 16 17:54:53.777447 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777414 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="storage-initializer" Apr 16 17:54:53.777447 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777419 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="storage-initializer" Apr 16 17:54:53.777447 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777431 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="storage-initializer" Apr 16 17:54:53.777447 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777437 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="storage-initializer" Apr 16 17:54:53.777447 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777445 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" Apr 16 17:54:53.777621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777451 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" Apr 16 17:54:53.777621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777462 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" Apr 16 17:54:53.777621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777467 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" Apr 16 17:54:53.777621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777526 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="5fce0b23-574e-44ac-85af-4977d4a860e8" containerName="model-chainer-raw-6c064" Apr 16 17:54:53.777621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777536 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f437258-b4d8-4dc4-9dd5-2a68d4b0af64" containerName="kserve-container" Apr 16 17:54:53.777621 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.777545 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="a74de01c-8be7-4cce-a0af-db2cc3990065" containerName="kserve-container" Apr 16 17:54:53.780739 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.780706 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" Apr 16 17:54:53.788973 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.788950 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh"] Apr 16 17:54:53.791154 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.791137 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" Apr 16 17:54:53.921422 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:53.921382 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh"] Apr 16 17:54:53.924496 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:54:53.924469 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf79572e2_caf7_404a_ab2d_5a9bad337d6e.slice/crio-6d842dc6df07f9cf46cfc08a8c593ddaf1599194f8ab30533f8bd85bee518b85 WatchSource:0}: Error finding container 6d842dc6df07f9cf46cfc08a8c593ddaf1599194f8ab30533f8bd85bee518b85: Status 404 returned error can't find the container with id 6d842dc6df07f9cf46cfc08a8c593ddaf1599194f8ab30533f8bd85bee518b85 Apr 16 17:54:54.808650 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:54.808613 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" event={"ID":"f79572e2-caf7-404a-ab2d-5a9bad337d6e","Type":"ContainerStarted","Data":"6d842dc6df07f9cf46cfc08a8c593ddaf1599194f8ab30533f8bd85bee518b85"} Apr 16 17:54:55.814200 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:55.814167 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" event={"ID":"f79572e2-caf7-404a-ab2d-5a9bad337d6e","Type":"ContainerStarted","Data":"53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753"} Apr 16 17:54:55.814545 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:55.814325 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" Apr 16 17:54:55.815894 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:55.815873 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" Apr 16 17:54:55.829691 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:55.829637 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" podStartSLOduration=1.786606367 podStartE2EDuration="2.829623111s" podCreationTimestamp="2026-04-16 17:54:53 +0000 UTC" firstStartedPulling="2026-04-16 17:54:53.926260462 +0000 UTC m=+815.399489913" lastFinishedPulling="2026-04-16 17:54:54.969277205 +0000 UTC m=+816.442506657" observedRunningTime="2026-04-16 17:54:55.828364891 +0000 UTC m=+817.301594365" watchObservedRunningTime="2026-04-16 17:54:55.829623111 +0000 UTC m=+817.302852586" Apr 16 17:54:57.059438 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.059417 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:54:57.198624 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.198535 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/58f1a8e0-7d1f-43cb-80a9-c619abf538a5-kserve-provision-location\") pod \"58f1a8e0-7d1f-43cb-80a9-c619abf538a5\" (UID: \"58f1a8e0-7d1f-43cb-80a9-c619abf538a5\") " Apr 16 17:54:57.198990 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.198963 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58f1a8e0-7d1f-43cb-80a9-c619abf538a5-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "58f1a8e0-7d1f-43cb-80a9-c619abf538a5" (UID: "58f1a8e0-7d1f-43cb-80a9-c619abf538a5"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:54:57.299992 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.299960 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/58f1a8e0-7d1f-43cb-80a9-c619abf538a5-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:54:57.694689 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.694662 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:54:57.703170 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.703149 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c4607e26-c460-4a4d-a12c-153a1cd3c7ad-kserve-provision-location\") pod \"c4607e26-c460-4a4d-a12c-153a1cd3c7ad\" (UID: \"c4607e26-c460-4a4d-a12c-153a1cd3c7ad\") " Apr 16 17:54:57.703454 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.703432 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4607e26-c460-4a4d-a12c-153a1cd3c7ad-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "c4607e26-c460-4a4d-a12c-153a1cd3c7ad" (UID: "c4607e26-c460-4a4d-a12c-153a1cd3c7ad"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:54:57.803733 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.803657 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c4607e26-c460-4a4d-a12c-153a1cd3c7ad-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:54:57.822501 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.822471 2571 generic.go:358] "Generic (PLEG): container finished" podID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerID="4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b" exitCode=0 Apr 16 17:54:57.822630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.822544 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" Apr 16 17:54:57.822630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.822553 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" event={"ID":"c4607e26-c460-4a4d-a12c-153a1cd3c7ad","Type":"ContainerDied","Data":"4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b"} Apr 16 17:54:57.822630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.822593 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6" event={"ID":"c4607e26-c460-4a4d-a12c-153a1cd3c7ad","Type":"ContainerDied","Data":"cc8ad0f4bdf5920797add576a59353c6ecb95c23e56ed5c5912c028129f0b802"} Apr 16 17:54:57.822630 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.822613 2571 scope.go:117] "RemoveContainer" containerID="4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b" Apr 16 17:54:57.823974 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.823946 2571 generic.go:358] "Generic (PLEG): container finished" podID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerID="b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2" exitCode=0 Apr 16 17:54:57.824076 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.824010 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" Apr 16 17:54:57.824076 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.824037 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" event={"ID":"58f1a8e0-7d1f-43cb-80a9-c619abf538a5","Type":"ContainerDied","Data":"b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2"} Apr 16 17:54:57.824182 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.824082 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8" event={"ID":"58f1a8e0-7d1f-43cb-80a9-c619abf538a5","Type":"ContainerDied","Data":"90a60e28f857bae2ac8be9496b44250d8533e06aa3fa9a53f619c991e8abd63c"} Apr 16 17:54:57.832186 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.832163 2571 scope.go:117] "RemoveContainer" containerID="05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e" Apr 16 17:54:57.839479 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.839463 2571 scope.go:117] "RemoveContainer" containerID="4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b" Apr 16 17:54:57.839728 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:54:57.839710 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b\": container with ID starting with 4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b not found: ID does not exist" containerID="4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b" Apr 16 17:54:57.839800 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.839739 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b"} err="failed to get container status \"4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b\": rpc 
error: code = NotFound desc = could not find container \"4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b\": container with ID starting with 4033550f93bbc5e15e4dc83fe16d60aa28b2aaa842b3e4c504778b515704ca8b not found: ID does not exist" Apr 16 17:54:57.839800 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.839762 2571 scope.go:117] "RemoveContainer" containerID="05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e" Apr 16 17:54:57.840020 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:54:57.840005 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e\": container with ID starting with 05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e not found: ID does not exist" containerID="05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e" Apr 16 17:54:57.840083 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.840030 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e"} err="failed to get container status \"05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e\": rpc error: code = NotFound desc = could not find container \"05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e\": container with ID starting with 05c9c602d7805d23169150295920aef4f310ab7a1ec567fe7d11de3142afc43e not found: ID does not exist" Apr 16 17:54:57.840083 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.840059 2571 scope.go:117] "RemoveContainer" containerID="b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2" Apr 16 17:54:57.846955 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.846937 2571 scope.go:117] "RemoveContainer" containerID="33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217" Apr 16 17:54:57.847231 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.847214 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8"] Apr 16 17:54:57.850713 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.850694 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-e4bae-predictor-5c4695cf55-trjk8"] Apr 16 17:54:57.854300 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.854280 2571 scope.go:117] "RemoveContainer" containerID="b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2" Apr 16 17:54:57.854538 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:54:57.854521 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2\": container with ID starting with b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2 not found: ID does not exist" containerID="b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2" Apr 16 17:54:57.854599 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.854543 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2"} err="failed to get container status \"b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2\": rpc error: code = NotFound desc = could not find container \"b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2\": 
container with ID starting with b573a8dd67a3bd72471f43e7b736c8e891ec35db5c06453f2e3f3d938b9960a2 not found: ID does not exist" Apr 16 17:54:57.854599 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.854557 2571 scope.go:117] "RemoveContainer" containerID="33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217" Apr 16 17:54:57.854807 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:54:57.854790 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217\": container with ID starting with 33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217 not found: ID does not exist" containerID="33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217" Apr 16 17:54:57.854884 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.854815 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217"} err="failed to get container status \"33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217\": rpc error: code = NotFound desc = could not find container \"33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217\": container with ID starting with 33fda55d9931443d1e212ed68bb0033bad4f43cf8ef7bb909eb9475d89786217 not found: ID does not exist" Apr 16 17:54:57.859129 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.859104 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6"] Apr 16 17:54:57.860996 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:57.860978 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-e4bae-predictor-ddcdfbccc-b5rt6"] Apr 16 17:54:59.035234 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:59.035202 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" path="/var/lib/kubelet/pods/58f1a8e0-7d1f-43cb-80a9-c619abf538a5/volumes" Apr 16 17:54:59.035619 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:54:59.035563 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" path="/var/lib/kubelet/pods/c4607e26-c460-4a4d-a12c-153a1cd3c7ad/volumes" Apr 16 17:55:03.804954 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.804923 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"] Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805278 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805289 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container" Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805303 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="storage-initializer" Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805310 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="storage-initializer" Apr 16 17:55:03.805403 
Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805316 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container"
Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805321 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container"
Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805335 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="storage-initializer"
Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805340 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="storage-initializer"
Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805389 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="58f1a8e0-7d1f-43cb-80a9-c619abf538a5" containerName="kserve-container"
Apr 16 17:55:03.805403 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.805398 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="c4607e26-c460-4a4d-a12c-153a1cd3c7ad" containerName="kserve-container"
Apr 16 17:55:03.810250 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.810232 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"
Apr 16 17:55:03.820192 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.820169 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"]
Apr 16 17:55:03.845561 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.845528 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9527746e-5495-4423-b309-8ed593905301-kserve-provision-location\") pod \"isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf\" (UID: \"9527746e-5495-4423-b309-8ed593905301\") " pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"
Apr 16 17:55:03.947088 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.947059 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9527746e-5495-4423-b309-8ed593905301-kserve-provision-location\") pod \"isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf\" (UID: \"9527746e-5495-4423-b309-8ed593905301\") " pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"
Apr 16 17:55:03.947465 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:03.947443 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9527746e-5495-4423-b309-8ed593905301-kserve-provision-location\") pod \"isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf\" (UID: \"9527746e-5495-4423-b309-8ed593905301\") " pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"
Apr 16 17:55:04.121143 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:04.121083 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"
Apr 16 17:55:04.245050 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:04.245029 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"]
Apr 16 17:55:04.246969 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:55:04.246948 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9527746e_5495_4423_b309_8ed593905301.slice/crio-63e0d953542773efc66d9e700a661f46e5e3762e3b30a44795402bb88d036561 WatchSource:0}: Error finding container 63e0d953542773efc66d9e700a661f46e5e3762e3b30a44795402bb88d036561: Status 404 returned error can't find the container with id 63e0d953542773efc66d9e700a661f46e5e3762e3b30a44795402bb88d036561
Apr 16 17:55:04.852477 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:04.852438 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" event={"ID":"9527746e-5495-4423-b309-8ed593905301","Type":"ContainerStarted","Data":"83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6"}
Apr 16 17:55:04.852477 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:04.852480 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" event={"ID":"9527746e-5495-4423-b309-8ed593905301","Type":"ContainerStarted","Data":"63e0d953542773efc66d9e700a661f46e5e3762e3b30a44795402bb88d036561"}
Apr 16 17:55:08.867106 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:08.867071 2571 generic.go:358] "Generic (PLEG): container finished" podID="9527746e-5495-4423-b309-8ed593905301" containerID="83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6" exitCode=0
Apr 16 17:55:08.867453 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:08.867132 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" event={"ID":"9527746e-5495-4423-b309-8ed593905301","Type":"ContainerDied","Data":"83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6"}
Apr 16 17:55:09.872087 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:09.872049 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" event={"ID":"9527746e-5495-4423-b309-8ed593905301","Type":"ContainerStarted","Data":"db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d"}
Apr 16 17:55:09.872087 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:09.872092 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" event={"ID":"9527746e-5495-4423-b309-8ed593905301","Type":"ContainerStarted","Data":"1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c"}
Apr 16 17:55:09.872561 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:09.872385 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"
Apr 16 17:55:09.872561 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:09.872416 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"
Apr 16 17:55:09.873711 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:09.873685 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 17:55:09.874309 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:09.874288 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:55:09.889393 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:09.889356 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podStartSLOduration=6.889344305 podStartE2EDuration="6.889344305s" podCreationTimestamp="2026-04-16 17:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:55:09.887184827 +0000 UTC m=+831.360414300" watchObservedRunningTime="2026-04-16 17:55:09.889344305 +0000 UTC m=+831.362573777" Apr 16 17:55:10.876276 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:10.876234 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 17:55:10.876661 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:10.876507 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:55:20.876656 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:20.876608 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 17:55:20.877200 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:20.877092 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:55:30.877127 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:30.877080 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 17:55:30.877706 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:30.877679 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:55:40.877169 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:40.877116 2571 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 17:55:40.877694 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:40.877639 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:55:50.876926 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:50.876878 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 17:55:50.877504 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:55:50.877473 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:56:00.876669 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:00.876579 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 17:56:00.877211 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:00.877175 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:56:10.877071 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:10.877036 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" Apr 16 17:56:10.877545 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:10.877359 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" Apr 16 17:56:18.830426 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:18.830391 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh_f79572e2-caf7-404a-ab2d-5a9bad337d6e/kserve-container/0.log" Apr 16 17:56:19.004554 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.004524 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"] Apr 16 17:56:19.004826 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.004804 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" containerID="cri-o://1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c" gracePeriod=30 Apr 16 17:56:19.004953 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.004923 2571 kuberuntime_container.go:864] "Killing container with a 
grace period" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" containerID="cri-o://db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d" gracePeriod=30 Apr 16 17:56:19.026578 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.026549 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7"] Apr 16 17:56:19.033456 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.033429 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" Apr 16 17:56:19.040177 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.040147 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7"] Apr 16 17:56:19.125428 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.125357 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh"] Apr 16 17:56:19.125706 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.125657 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" podUID="f79572e2-caf7-404a-ab2d-5a9bad337d6e" containerName="kserve-container" containerID="cri-o://53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753" gracePeriod=30 Apr 16 17:56:19.210770 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.210743 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ca06b770-0beb-4fca-ae0e-a6113065dbdd-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7\" (UID: \"ca06b770-0beb-4fca-ae0e-a6113065dbdd\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" Apr 16 17:56:19.312186 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.312151 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ca06b770-0beb-4fca-ae0e-a6113065dbdd-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7\" (UID: \"ca06b770-0beb-4fca-ae0e-a6113065dbdd\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" Apr 16 17:56:19.312532 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.312510 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ca06b770-0beb-4fca-ae0e-a6113065dbdd-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7\" (UID: \"ca06b770-0beb-4fca-ae0e-a6113065dbdd\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" Apr 16 17:56:19.346287 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.346265 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" Apr 16 17:56:19.369444 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.369414 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" Apr 16 17:56:19.474037 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:19.474009 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7"] Apr 16 17:56:19.477388 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:56:19.477358 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca06b770_0beb_4fca_ae0e_a6113065dbdd.slice/crio-a8fcd0cde98716521bc255d7114a6025461634aa37f1eb5c78dddd8f9a5a2cea WatchSource:0}: Error finding container a8fcd0cde98716521bc255d7114a6025461634aa37f1eb5c78dddd8f9a5a2cea: Status 404 returned error can't find the container with id a8fcd0cde98716521bc255d7114a6025461634aa37f1eb5c78dddd8f9a5a2cea Apr 16 17:56:20.122533 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.122495 2571 generic.go:358] "Generic (PLEG): container finished" podID="f79572e2-caf7-404a-ab2d-5a9bad337d6e" containerID="53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753" exitCode=2 Apr 16 17:56:20.122967 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.122553 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" Apr 16 17:56:20.122967 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.122581 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" event={"ID":"f79572e2-caf7-404a-ab2d-5a9bad337d6e","Type":"ContainerDied","Data":"53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753"} Apr 16 17:56:20.122967 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.122615 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh" event={"ID":"f79572e2-caf7-404a-ab2d-5a9bad337d6e","Type":"ContainerDied","Data":"6d842dc6df07f9cf46cfc08a8c593ddaf1599194f8ab30533f8bd85bee518b85"} Apr 16 17:56:20.122967 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.122635 2571 scope.go:117] "RemoveContainer" containerID="53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753" Apr 16 17:56:20.124187 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.124163 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" event={"ID":"ca06b770-0beb-4fca-ae0e-a6113065dbdd","Type":"ContainerStarted","Data":"9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7"} Apr 16 17:56:20.124290 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.124196 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" event={"ID":"ca06b770-0beb-4fca-ae0e-a6113065dbdd","Type":"ContainerStarted","Data":"a8fcd0cde98716521bc255d7114a6025461634aa37f1eb5c78dddd8f9a5a2cea"} Apr 16 17:56:20.130960 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.130946 2571 scope.go:117] "RemoveContainer" containerID="53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753" Apr 16 17:56:20.131196 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:56:20.131176 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753\": container with ID 
Apr 16 17:56:20.131328 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.131201 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753"} err="failed to get container status \"53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753\": rpc error: code = NotFound desc = could not find container \"53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753\": container with ID starting with 53d677909bacbf84972cfc06a1c51384d7a348c02440871970a80e71f1cdd753 not found: ID does not exist"
Apr 16 17:56:20.158907 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.158887 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh"]
Apr 16 17:56:20.163505 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.163486 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-8600f-predictor-6d8f89894d-tf7jh"]
Apr 16 17:56:20.876646 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.876599 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 17:56:20.876976 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:20.876947 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:56:21.040058 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:21.040028 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79572e2-caf7-404a-ab2d-5a9bad337d6e" path="/var/lib/kubelet/pods/f79572e2-caf7-404a-ab2d-5a9bad337d6e/volumes"
Apr 16 17:56:23.137301 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:23.137266 2571 generic.go:358] "Generic (PLEG): container finished" podID="9527746e-5495-4423-b309-8ed593905301" containerID="1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c" exitCode=0
Apr 16 17:56:23.137633 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:23.137338 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" event={"ID":"9527746e-5495-4423-b309-8ed593905301","Type":"ContainerDied","Data":"1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c"}
Apr 16 17:56:24.141923 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:24.141889 2571 generic.go:358] "Generic (PLEG): container finished" podID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerID="9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7" exitCode=0
Apr 16 17:56:24.142264 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:24.141961 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" event={"ID":"ca06b770-0beb-4fca-ae0e-a6113065dbdd","Type":"ContainerDied","Data":"9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7"}
Apr 16 17:56:25.146103 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:25.146072 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" event={"ID":"ca06b770-0beb-4fca-ae0e-a6113065dbdd","Type":"ContainerStarted","Data":"06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c"}
Apr 16 17:56:25.146519 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:25.146414 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7"
Apr 16 17:56:25.147552 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:25.147530 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused"
Apr 16 17:56:25.162387 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:25.162343 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podStartSLOduration=6.162329602 podStartE2EDuration="6.162329602s" podCreationTimestamp="2026-04-16 17:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:56:25.161180515 +0000 UTC m=+906.634409994" watchObservedRunningTime="2026-04-16 17:56:25.162329602 +0000 UTC m=+906.635559076"
Apr 16 17:56:26.150050 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:26.150008 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused"
Apr 16 17:56:30.876660 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:30.876614 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 17:56:30.877104 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:30.877001 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:56:36.150309 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:36.150263 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused"
Apr 16 17:56:40.876577 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:40.876522 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 17:56:40.877018 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:40.876698 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" Apr 16 17:56:40.877018 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:40.876912 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:56:40.877018 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:40.877011 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" Apr 16 17:56:46.150774 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:46.150732 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:56:49.185045 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.185021 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" Apr 16 17:56:49.230869 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.230831 2571 generic.go:358] "Generic (PLEG): container finished" podID="9527746e-5495-4423-b309-8ed593905301" containerID="db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d" exitCode=137 Apr 16 17:56:49.230987 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.230949 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" Apr 16 17:56:49.231039 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.230946 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" event={"ID":"9527746e-5495-4423-b309-8ed593905301","Type":"ContainerDied","Data":"db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d"} Apr 16 17:56:49.231075 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.231054 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf" event={"ID":"9527746e-5495-4423-b309-8ed593905301","Type":"ContainerDied","Data":"63e0d953542773efc66d9e700a661f46e5e3762e3b30a44795402bb88d036561"} Apr 16 17:56:49.231075 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.231070 2571 scope.go:117] "RemoveContainer" containerID="db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d" Apr 16 17:56:49.235255 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.235238 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9527746e-5495-4423-b309-8ed593905301-kserve-provision-location\") pod \"9527746e-5495-4423-b309-8ed593905301\" (UID: \"9527746e-5495-4423-b309-8ed593905301\") " Apr 16 17:56:49.235529 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.235510 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9527746e-5495-4423-b309-8ed593905301-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "9527746e-5495-4423-b309-8ed593905301" (UID: 
"9527746e-5495-4423-b309-8ed593905301"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:56:49.239448 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.239431 2571 scope.go:117] "RemoveContainer" containerID="1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c" Apr 16 17:56:49.246737 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.246717 2571 scope.go:117] "RemoveContainer" containerID="83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6" Apr 16 17:56:49.253310 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.253296 2571 scope.go:117] "RemoveContainer" containerID="db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d" Apr 16 17:56:49.253543 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:56:49.253526 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d\": container with ID starting with db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d not found: ID does not exist" containerID="db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d" Apr 16 17:56:49.253598 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.253549 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d"} err="failed to get container status \"db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d\": rpc error: code = NotFound desc = could not find container \"db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d\": container with ID starting with db8f96b91d411738a94a94674360d36456595c9b3188471f2e8e2edec73d354d not found: ID does not exist" Apr 16 17:56:49.253598 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.253564 2571 scope.go:117] "RemoveContainer" containerID="1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c" Apr 16 17:56:49.253741 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:56:49.253728 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c\": container with ID starting with 1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c not found: ID does not exist" containerID="1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c" Apr 16 17:56:49.253782 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.253743 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c"} err="failed to get container status \"1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c\": rpc error: code = NotFound desc = could not find container \"1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c\": container with ID starting with 1757a821997403147828a28a4743f444904dc126cdc9a064ccd9f1203ab6973c not found: ID does not exist" Apr 16 17:56:49.253782 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.253755 2571 scope.go:117] "RemoveContainer" containerID="83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6" Apr 16 17:56:49.253962 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:56:49.253950 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6\": container with ID starting with 83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6 not found: ID does not exist" containerID="83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6" Apr 16 17:56:49.254009 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.253965 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6"} err="failed to get container status \"83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6\": rpc error: code = NotFound desc = could not find container \"83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6\": container with ID starting with 83b99db43816170e3a96af7f2fe51d9be37284976ebb34cc1ecef1569f90a3b6 not found: ID does not exist" Apr 16 17:56:49.335982 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.335930 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9527746e-5495-4423-b309-8ed593905301-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:56:49.552657 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.552635 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"] Apr 16 17:56:49.556447 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:49.556422 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-8600f-predictor-5cddd7c46b-vdstf"] Apr 16 17:56:51.035842 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:51.035808 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9527746e-5495-4423-b309-8ed593905301" path="/var/lib/kubelet/pods/9527746e-5495-4423-b309-8ed593905301/volumes" Apr 16 17:56:56.150662 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:56:56.150625 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:57:06.150422 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:57:06.150370 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:57:16.150195 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:57:16.150157 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:57:26.150901 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:57:26.150826 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:57:33.031714 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:57:33.031608 2571 
prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:57:43.031608 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:57:43.031558 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:57:53.032110 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:57:53.032071 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:58:03.031737 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:03.031690 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:58:13.032243 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:13.032188 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:58:23.032160 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:23.032110 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:58:33.036599 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:33.036565 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" Apr 16 17:58:39.200421 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.200385 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7"] Apr 16 17:58:39.200893 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.200642 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" containerID="cri-o://06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c" gracePeriod=30 Apr 16 17:58:39.284905 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.284874 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8"] Apr 16 17:58:39.285278 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285261 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f79572e2-caf7-404a-ab2d-5a9bad337d6e" containerName="kserve-container" Apr 16 
17:58:39.285356 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285280 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79572e2-caf7-404a-ab2d-5a9bad337d6e" containerName="kserve-container" Apr 16 17:58:39.285356 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285296 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" Apr 16 17:58:39.285356 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285304 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" Apr 16 17:58:39.285356 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285329 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" Apr 16 17:58:39.285356 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285338 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" Apr 16 17:58:39.285609 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285377 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="storage-initializer" Apr 16 17:58:39.285609 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285387 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="storage-initializer" Apr 16 17:58:39.285609 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285484 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="kserve-container" Apr 16 17:58:39.285609 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285497 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="f79572e2-caf7-404a-ab2d-5a9bad337d6e" containerName="kserve-container" Apr 16 17:58:39.285609 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.285510 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="9527746e-5495-4423-b309-8ed593905301" containerName="agent" Apr 16 17:58:39.288981 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.288961 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 17:58:39.297799 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.297774 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8"] Apr 16 17:58:39.348698 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.348672 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a320e17f-b2fc-4b71-80f0-78f2096cdbf2-kserve-provision-location\") pod \"isvc-primary-d67b90-predictor-769658cf7d-mgsw8\" (UID: \"a320e17f-b2fc-4b71-80f0-78f2096cdbf2\") " pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 17:58:39.449355 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.449328 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a320e17f-b2fc-4b71-80f0-78f2096cdbf2-kserve-provision-location\") pod \"isvc-primary-d67b90-predictor-769658cf7d-mgsw8\" (UID: \"a320e17f-b2fc-4b71-80f0-78f2096cdbf2\") " pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 17:58:39.449641 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.449625 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a320e17f-b2fc-4b71-80f0-78f2096cdbf2-kserve-provision-location\") pod \"isvc-primary-d67b90-predictor-769658cf7d-mgsw8\" (UID: \"a320e17f-b2fc-4b71-80f0-78f2096cdbf2\") " pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 17:58:39.600391 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.600324 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 17:58:39.727619 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.727602 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8"] Apr 16 17:58:39.729926 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:58:39.729900 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda320e17f_b2fc_4b71_80f0_78f2096cdbf2.slice/crio-ed2632d616c6ff42e1cf01e8635d7a1502c8eee6ccbc9fe8d576f4a05558834e WatchSource:0}: Error finding container ed2632d616c6ff42e1cf01e8635d7a1502c8eee6ccbc9fe8d576f4a05558834e: Status 404 returned error can't find the container with id ed2632d616c6ff42e1cf01e8635d7a1502c8eee6ccbc9fe8d576f4a05558834e Apr 16 17:58:39.732098 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:39.732080 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 17:58:40.618057 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:40.618021 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" event={"ID":"a320e17f-b2fc-4b71-80f0-78f2096cdbf2","Type":"ContainerStarted","Data":"1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd"} Apr 16 17:58:40.618057 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:40.618057 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" event={"ID":"a320e17f-b2fc-4b71-80f0-78f2096cdbf2","Type":"ContainerStarted","Data":"ed2632d616c6ff42e1cf01e8635d7a1502c8eee6ccbc9fe8d576f4a05558834e"} Apr 16 17:58:43.032164 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:43.032127 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 17:58:43.629539 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:43.629461 2571 generic.go:358] "Generic (PLEG): container finished" podID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerID="1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd" exitCode=0 Apr 16 17:58:43.629539 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:43.629514 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" event={"ID":"a320e17f-b2fc-4b71-80f0-78f2096cdbf2","Type":"ContainerDied","Data":"1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd"} Apr 16 17:58:44.633733 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:44.633697 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" event={"ID":"a320e17f-b2fc-4b71-80f0-78f2096cdbf2","Type":"ContainerStarted","Data":"92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0"} Apr 16 17:58:44.634223 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:44.633985 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 17:58:44.635386 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:44.635360 2571 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 16 17:58:44.649902 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:44.649839 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podStartSLOduration=5.649827469 podStartE2EDuration="5.649827469s" podCreationTimestamp="2026-04-16 17:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:58:44.648511968 +0000 UTC m=+1046.121741466" watchObservedRunningTime="2026-04-16 17:58:44.649827469 +0000 UTC m=+1046.123056943" Apr 16 17:58:45.637532 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:45.637488 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 16 17:58:47.448362 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.448340 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" Apr 16 17:58:47.613476 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.613399 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ca06b770-0beb-4fca-ae0e-a6113065dbdd-kserve-provision-location\") pod \"ca06b770-0beb-4fca-ae0e-a6113065dbdd\" (UID: \"ca06b770-0beb-4fca-ae0e-a6113065dbdd\") " Apr 16 17:58:47.613711 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.613690 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca06b770-0beb-4fca-ae0e-a6113065dbdd-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "ca06b770-0beb-4fca-ae0e-a6113065dbdd" (UID: "ca06b770-0beb-4fca-ae0e-a6113065dbdd"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:58:47.645334 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.645306 2571 generic.go:358] "Generic (PLEG): container finished" podID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerID="06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c" exitCode=0 Apr 16 17:58:47.645477 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.645377 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" Apr 16 17:58:47.645477 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.645380 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" event={"ID":"ca06b770-0beb-4fca-ae0e-a6113065dbdd","Type":"ContainerDied","Data":"06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c"} Apr 16 17:58:47.645477 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.645413 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7" event={"ID":"ca06b770-0beb-4fca-ae0e-a6113065dbdd","Type":"ContainerDied","Data":"a8fcd0cde98716521bc255d7114a6025461634aa37f1eb5c78dddd8f9a5a2cea"} Apr 16 17:58:47.645477 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.645428 2571 scope.go:117] "RemoveContainer" containerID="06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c" Apr 16 17:58:47.654153 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.654136 2571 scope.go:117] "RemoveContainer" containerID="9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7" Apr 16 17:58:47.660901 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.660882 2571 scope.go:117] "RemoveContainer" containerID="06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c" Apr 16 17:58:47.661122 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:58:47.661103 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c\": container with ID starting with 06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c not found: ID does not exist" containerID="06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c" Apr 16 17:58:47.661177 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.661131 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c"} err="failed to get container status \"06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c\": rpc error: code = NotFound desc = could not find container \"06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c\": container with ID starting with 06d0d7567da194d949da6d46ef4d590f3ae0d79e28e998b6cffe3796b071fc3c not found: ID does not exist" Apr 16 17:58:47.661177 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.661147 2571 scope.go:117] "RemoveContainer" containerID="9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7" Apr 16 17:58:47.661397 ip-10-0-138-134 kubenswrapper[2571]: E0416 17:58:47.661379 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7\": container with ID starting with 9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7 not found: ID does not exist" containerID="9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7" Apr 16 17:58:47.661446 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.661404 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7"} err="failed to get container status 
\"9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7\": rpc error: code = NotFound desc = could not find container \"9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7\": container with ID starting with 9d4659491ffa55a62e17f7892be381f6277c72c830a2be604ce00d81cc1c9dc7 not found: ID does not exist" Apr 16 17:58:47.665577 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.665557 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7"] Apr 16 17:58:47.668484 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.668467 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-b6832-predictor-766d4cf5b7-nvpp7"] Apr 16 17:58:47.714116 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:47.714089 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ca06b770-0beb-4fca-ae0e-a6113065dbdd-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 17:58:49.036319 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:49.036274 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" path="/var/lib/kubelet/pods/ca06b770-0beb-4fca-ae0e-a6113065dbdd/volumes" Apr 16 17:58:55.637922 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:58:55.637880 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 16 17:59:05.638506 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:05.638418 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 16 17:59:15.638349 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:15.638308 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 16 17:59:25.637768 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:25.637726 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 16 17:59:35.638111 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:35.638071 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 16 17:59:45.638901 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:45.638846 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 17:59:49.384628 ip-10-0-138-134 kubenswrapper[2571]: I0416 
17:59:49.384596 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw"] Apr 16 17:59:49.385133 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.385117 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="storage-initializer" Apr 16 17:59:49.385133 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.385135 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="storage-initializer" Apr 16 17:59:49.385225 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.385146 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" Apr 16 17:59:49.385225 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.385151 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" Apr 16 17:59:49.385225 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.385206 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="ca06b770-0beb-4fca-ae0e-a6113065dbdd" containerName="kserve-container" Apr 16 17:59:49.388221 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.388205 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 17:59:49.390253 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.390230 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"invalid-s3-secret-d67b90\"" Apr 16 17:59:49.390391 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.390366 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"invalid-s3-sa-d67b90-dockercfg-vhk44\"" Apr 16 17:59:49.390848 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.390833 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"odh-kserve-custom-ca-bundle\"" Apr 16 17:59:49.402709 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.402688 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw"] Apr 16 17:59:49.487423 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.487400 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/0bc5d3d7-7e42-4b9d-b345-945560f025df-cabundle-cert\") pod \"isvc-secondary-d67b90-predictor-746794b559-zjshw\" (UID: \"0bc5d3d7-7e42-4b9d-b345-945560f025df\") " pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 17:59:49.487546 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.487431 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0bc5d3d7-7e42-4b9d-b345-945560f025df-kserve-provision-location\") pod \"isvc-secondary-d67b90-predictor-746794b559-zjshw\" (UID: \"0bc5d3d7-7e42-4b9d-b345-945560f025df\") " pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 17:59:49.588607 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.588569 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle-cert\" (UniqueName: 
\"kubernetes.io/configmap/0bc5d3d7-7e42-4b9d-b345-945560f025df-cabundle-cert\") pod \"isvc-secondary-d67b90-predictor-746794b559-zjshw\" (UID: \"0bc5d3d7-7e42-4b9d-b345-945560f025df\") " pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 17:59:49.588607 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.588613 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0bc5d3d7-7e42-4b9d-b345-945560f025df-kserve-provision-location\") pod \"isvc-secondary-d67b90-predictor-746794b559-zjshw\" (UID: \"0bc5d3d7-7e42-4b9d-b345-945560f025df\") " pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 17:59:49.589020 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.589004 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0bc5d3d7-7e42-4b9d-b345-945560f025df-kserve-provision-location\") pod \"isvc-secondary-d67b90-predictor-746794b559-zjshw\" (UID: \"0bc5d3d7-7e42-4b9d-b345-945560f025df\") " pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 17:59:49.589173 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.589155 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/0bc5d3d7-7e42-4b9d-b345-945560f025df-cabundle-cert\") pod \"isvc-secondary-d67b90-predictor-746794b559-zjshw\" (UID: \"0bc5d3d7-7e42-4b9d-b345-945560f025df\") " pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 17:59:49.699139 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:49.699115 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 17:59:50.028018 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:50.027986 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw"] Apr 16 17:59:50.030933 ip-10-0-138-134 kubenswrapper[2571]: W0416 17:59:50.030904 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc5d3d7_7e42_4b9d_b345_945560f025df.slice/crio-2e42b619ce21b7f0e09ce21d3ca8738589e68a22610ea717ade8e3854ae9cd5e WatchSource:0}: Error finding container 2e42b619ce21b7f0e09ce21d3ca8738589e68a22610ea717ade8e3854ae9cd5e: Status 404 returned error can't find the container with id 2e42b619ce21b7f0e09ce21d3ca8738589e68a22610ea717ade8e3854ae9cd5e Apr 16 17:59:50.865946 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:50.865911 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" event={"ID":"0bc5d3d7-7e42-4b9d-b345-945560f025df","Type":"ContainerStarted","Data":"ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e"} Apr 16 17:59:50.866295 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:50.865952 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" event={"ID":"0bc5d3d7-7e42-4b9d-b345-945560f025df","Type":"ContainerStarted","Data":"2e42b619ce21b7f0e09ce21d3ca8738589e68a22610ea717ade8e3854ae9cd5e"} Apr 16 17:59:57.891475 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:57.891448 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-d67b90-predictor-746794b559-zjshw_0bc5d3d7-7e42-4b9d-b345-945560f025df/storage-initializer/0.log" Apr 16 17:59:57.891829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:57.891484 2571 generic.go:358] "Generic (PLEG): container finished" podID="0bc5d3d7-7e42-4b9d-b345-945560f025df" containerID="ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e" exitCode=1 Apr 16 17:59:57.891829 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:57.891534 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" event={"ID":"0bc5d3d7-7e42-4b9d-b345-945560f025df","Type":"ContainerDied","Data":"ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e"} Apr 16 17:59:58.897530 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:58.897500 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-d67b90-predictor-746794b559-zjshw_0bc5d3d7-7e42-4b9d-b345-945560f025df/storage-initializer/0.log" Apr 16 17:59:58.897934 ip-10-0-138-134 kubenswrapper[2571]: I0416 17:59:58.897615 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" event={"ID":"0bc5d3d7-7e42-4b9d-b345-945560f025df","Type":"ContainerStarted","Data":"1e9aa09e8f7b12870f67693ff213fcd91eacaeed9148ba621fab09c99482f856"} Apr 16 18:00:01.910463 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:01.910441 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-d67b90-predictor-746794b559-zjshw_0bc5d3d7-7e42-4b9d-b345-945560f025df/storage-initializer/1.log" Apr 16 18:00:01.910868 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:01.910806 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-d67b90-predictor-746794b559-zjshw_0bc5d3d7-7e42-4b9d-b345-945560f025df/storage-initializer/0.log" Apr 16 18:00:01.910939 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:01.910844 2571 generic.go:358] "Generic (PLEG): container finished" podID="0bc5d3d7-7e42-4b9d-b345-945560f025df" containerID="1e9aa09e8f7b12870f67693ff213fcd91eacaeed9148ba621fab09c99482f856" exitCode=1 Apr 16 18:00:01.910939 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:01.910919 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" event={"ID":"0bc5d3d7-7e42-4b9d-b345-945560f025df","Type":"ContainerDied","Data":"1e9aa09e8f7b12870f67693ff213fcd91eacaeed9148ba621fab09c99482f856"} Apr 16 18:00:01.911038 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:01.910964 2571 scope.go:117] "RemoveContainer" containerID="ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e" Apr 16 18:00:01.911315 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:01.911300 2571 scope.go:117] "RemoveContainer" containerID="ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e" Apr 16 18:00:01.921057 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:00:01.921025 2571 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-secondary-d67b90-predictor-746794b559-zjshw_kserve-ci-e2e-test_0bc5d3d7-7e42-4b9d-b345-945560f025df_0 in pod sandbox 2e42b619ce21b7f0e09ce21d3ca8738589e68a22610ea717ade8e3854ae9cd5e from index: no such id: 'ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e'" containerID="ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e" Apr 16 18:00:01.921168 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:01.921063 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e"} err="rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-secondary-d67b90-predictor-746794b559-zjshw_kserve-ci-e2e-test_0bc5d3d7-7e42-4b9d-b345-945560f025df_0 in pod sandbox 2e42b619ce21b7f0e09ce21d3ca8738589e68a22610ea717ade8e3854ae9cd5e from index: no such id: 'ae8c95abd31b55dfdde291c509a6789fed5d6c776cc855e5b639332473c0bf8e'" Apr 16 18:00:01.921301 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:00:01.921280 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-initializer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-initializer pod=isvc-secondary-d67b90-predictor-746794b559-zjshw_kserve-ci-e2e-test(0bc5d3d7-7e42-4b9d-b345-945560f025df)\"" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" podUID="0bc5d3d7-7e42-4b9d-b345-945560f025df" Apr 16 18:00:02.916234 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:02.916204 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-d67b90-predictor-746794b559-zjshw_0bc5d3d7-7e42-4b9d-b345-945560f025df/storage-initializer/1.log" Apr 16 18:00:09.475282 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.475248 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw"] Apr 16 18:00:09.527968 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.527934 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8"] Apr 16 18:00:09.528285 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.528245 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" containerID="cri-o://92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0" gracePeriod=30 Apr 16 18:00:09.574912 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.574886 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc"] Apr 16 18:00:09.578723 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.578701 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:09.580739 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.580711 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"fail-s3-sa-67255c-dockercfg-8r72m\"" Apr 16 18:00:09.580836 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.580711 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"fail-s3-secret-67255c\"" Apr 16 18:00:09.593523 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.593502 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc"] Apr 16 18:00:09.607770 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.607750 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-d67b90-predictor-746794b559-zjshw_0bc5d3d7-7e42-4b9d-b345-945560f025df/storage-initializer/1.log" Apr 16 18:00:09.607889 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.607803 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 18:00:09.742531 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.742467 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0bc5d3d7-7e42-4b9d-b345-945560f025df-kserve-provision-location\") pod \"0bc5d3d7-7e42-4b9d-b345-945560f025df\" (UID: \"0bc5d3d7-7e42-4b9d-b345-945560f025df\") " Apr 16 18:00:09.742531 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.742501 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/0bc5d3d7-7e42-4b9d-b345-945560f025df-cabundle-cert\") pod \"0bc5d3d7-7e42-4b9d-b345-945560f025df\" (UID: \"0bc5d3d7-7e42-4b9d-b345-945560f025df\") " Apr 16 18:00:09.742728 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.742697 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bcd58984-1dbd-4fff-b16d-420a04eb32ed-kserve-provision-location\") pod \"isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc\" (UID: \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\") " pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:09.742788 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.742742 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc5d3d7-7e42-4b9d-b345-945560f025df-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "0bc5d3d7-7e42-4b9d-b345-945560f025df" (UID: "0bc5d3d7-7e42-4b9d-b345-945560f025df"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:00:09.742844 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.742806 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/bcd58984-1dbd-4fff-b16d-420a04eb32ed-cabundle-cert\") pod \"isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc\" (UID: \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\") " pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:09.742943 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.742874 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0bc5d3d7-7e42-4b9d-b345-945560f025df-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 18:00:09.742993 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.742951 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc5d3d7-7e42-4b9d-b345-945560f025df-cabundle-cert" (OuterVolumeSpecName: "cabundle-cert") pod "0bc5d3d7-7e42-4b9d-b345-945560f025df" (UID: "0bc5d3d7-7e42-4b9d-b345-945560f025df"). InnerVolumeSpecName "cabundle-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:00:09.843985 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.843958 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bcd58984-1dbd-4fff-b16d-420a04eb32ed-kserve-provision-location\") pod \"isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc\" (UID: \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\") " pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:09.844095 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.844018 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/bcd58984-1dbd-4fff-b16d-420a04eb32ed-cabundle-cert\") pod \"isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc\" (UID: \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\") " pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:09.844150 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.844109 2571 reconciler_common.go:299] "Volume detached for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/0bc5d3d7-7e42-4b9d-b345-945560f025df-cabundle-cert\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 18:00:09.844301 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.844281 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bcd58984-1dbd-4fff-b16d-420a04eb32ed-kserve-provision-location\") pod \"isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc\" (UID: \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\") " pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:09.844556 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.844540 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/bcd58984-1dbd-4fff-b16d-420a04eb32ed-cabundle-cert\") pod \"isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc\" (UID: \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\") " pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:09.889197 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.889168 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:09.943819 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.943795 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-d67b90-predictor-746794b559-zjshw_0bc5d3d7-7e42-4b9d-b345-945560f025df/storage-initializer/1.log" Apr 16 18:00:09.943960 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.943931 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" Apr 16 18:00:09.944025 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.943970 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw" event={"ID":"0bc5d3d7-7e42-4b9d-b345-945560f025df","Type":"ContainerDied","Data":"2e42b619ce21b7f0e09ce21d3ca8738589e68a22610ea717ade8e3854ae9cd5e"} Apr 16 18:00:09.944025 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.944006 2571 scope.go:117] "RemoveContainer" containerID="1e9aa09e8f7b12870f67693ff213fcd91eacaeed9148ba621fab09c99482f856" Apr 16 18:00:09.993977 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.993896 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw"] Apr 16 18:00:09.997322 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:09.997295 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-d67b90-predictor-746794b559-zjshw"] Apr 16 18:00:10.021595 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:10.021574 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc"] Apr 16 18:00:10.024526 ip-10-0-138-134 kubenswrapper[2571]: W0416 18:00:10.024499 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcd58984_1dbd_4fff_b16d_420a04eb32ed.slice/crio-9c4fd4147919a693b1b9e0622de38496a731a19ac83ab204412b9a04e32a9102 WatchSource:0}: Error finding container 9c4fd4147919a693b1b9e0622de38496a731a19ac83ab204412b9a04e32a9102: Status 404 returned error can't find the container with id 9c4fd4147919a693b1b9e0622de38496a731a19ac83ab204412b9a04e32a9102 Apr 16 18:00:10.949485 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:10.949452 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" event={"ID":"bcd58984-1dbd-4fff-b16d-420a04eb32ed","Type":"ContainerStarted","Data":"bd5fcf008e93a4da4793633382ca56a73c5542909e9d0f10f09529e4a748a5c1"} Apr 16 18:00:10.949485 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:10.949485 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" event={"ID":"bcd58984-1dbd-4fff-b16d-420a04eb32ed","Type":"ContainerStarted","Data":"9c4fd4147919a693b1b9e0622de38496a731a19ac83ab204412b9a04e32a9102"} Apr 16 18:00:11.034770 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:11.034735 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc5d3d7-7e42-4b9d-b345-945560f025df" path="/var/lib/kubelet/pods/0bc5d3d7-7e42-4b9d-b345-945560f025df/volumes" Apr 16 18:00:13.268821 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.268803 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 18:00:13.371618 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.371558 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a320e17f-b2fc-4b71-80f0-78f2096cdbf2-kserve-provision-location\") pod \"a320e17f-b2fc-4b71-80f0-78f2096cdbf2\" (UID: \"a320e17f-b2fc-4b71-80f0-78f2096cdbf2\") " Apr 16 18:00:13.371850 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.371830 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a320e17f-b2fc-4b71-80f0-78f2096cdbf2-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "a320e17f-b2fc-4b71-80f0-78f2096cdbf2" (UID: "a320e17f-b2fc-4b71-80f0-78f2096cdbf2"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:00:13.472777 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.472749 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a320e17f-b2fc-4b71-80f0-78f2096cdbf2-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 18:00:13.960435 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.960399 2571 generic.go:358] "Generic (PLEG): container finished" podID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerID="92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0" exitCode=0 Apr 16 18:00:13.960608 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.960476 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" Apr 16 18:00:13.960608 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.960482 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" event={"ID":"a320e17f-b2fc-4b71-80f0-78f2096cdbf2","Type":"ContainerDied","Data":"92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0"} Apr 16 18:00:13.960608 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.960519 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8" event={"ID":"a320e17f-b2fc-4b71-80f0-78f2096cdbf2","Type":"ContainerDied","Data":"ed2632d616c6ff42e1cf01e8635d7a1502c8eee6ccbc9fe8d576f4a05558834e"} Apr 16 18:00:13.960608 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.960535 2571 scope.go:117] "RemoveContainer" containerID="92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0" Apr 16 18:00:13.969759 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.969743 2571 scope.go:117] "RemoveContainer" containerID="1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd" Apr 16 18:00:13.976983 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.976962 2571 scope.go:117] "RemoveContainer" containerID="92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0" Apr 16 18:00:13.979593 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:00:13.977212 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0\": container with ID starting with 92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0 not found: ID does not exist" 
containerID="92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0" Apr 16 18:00:13.979593 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.977660 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0"} err="failed to get container status \"92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0\": rpc error: code = NotFound desc = could not find container \"92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0\": container with ID starting with 92fc39dbdc7b222700dd67b41accb7565e126b62220575787b56e439143165f0 not found: ID does not exist" Apr 16 18:00:13.979593 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.977699 2571 scope.go:117] "RemoveContainer" containerID="1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd" Apr 16 18:00:13.980388 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:00:13.979997 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd\": container with ID starting with 1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd not found: ID does not exist" containerID="1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd" Apr 16 18:00:13.980388 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.980048 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd"} err="failed to get container status \"1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd\": rpc error: code = NotFound desc = could not find container \"1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd\": container with ID starting with 1e171d0d2e2d358d4629a57e6ef3f0d8c85e049819991e19fc86e1a6c00ac9cd not found: ID does not exist" Apr 16 18:00:13.983777 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.983747 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8"] Apr 16 18:00:13.987646 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:13.987624 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-d67b90-predictor-769658cf7d-mgsw8"] Apr 16 18:00:14.966320 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:14.966292 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_bcd58984-1dbd-4fff-b16d-420a04eb32ed/storage-initializer/0.log" Apr 16 18:00:14.966696 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:14.966331 2571 generic.go:358] "Generic (PLEG): container finished" podID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" containerID="bd5fcf008e93a4da4793633382ca56a73c5542909e9d0f10f09529e4a748a5c1" exitCode=1 Apr 16 18:00:14.966696 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:14.966374 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" event={"ID":"bcd58984-1dbd-4fff-b16d-420a04eb32ed","Type":"ContainerDied","Data":"bd5fcf008e93a4da4793633382ca56a73c5542909e9d0f10f09529e4a748a5c1"} Apr 16 18:00:15.036084 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:15.036062 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" 
path="/var/lib/kubelet/pods/a320e17f-b2fc-4b71-80f0-78f2096cdbf2/volumes" Apr 16 18:00:15.971566 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:15.971541 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_bcd58984-1dbd-4fff-b16d-420a04eb32ed/storage-initializer/0.log" Apr 16 18:00:15.971974 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:15.971652 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" event={"ID":"bcd58984-1dbd-4fff-b16d-420a04eb32ed","Type":"ContainerStarted","Data":"9ac04c7f5a9f076732fa9df787f5feb65942710da39885f095ed8a5ae40b1f38"} Apr 16 18:00:18.983669 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:18.983643 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_bcd58984-1dbd-4fff-b16d-420a04eb32ed/storage-initializer/1.log" Apr 16 18:00:18.984047 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:18.984004 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_bcd58984-1dbd-4fff-b16d-420a04eb32ed/storage-initializer/0.log" Apr 16 18:00:18.984047 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:18.984038 2571 generic.go:358] "Generic (PLEG): container finished" podID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" containerID="9ac04c7f5a9f076732fa9df787f5feb65942710da39885f095ed8a5ae40b1f38" exitCode=1 Apr 16 18:00:18.984137 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:18.984064 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" event={"ID":"bcd58984-1dbd-4fff-b16d-420a04eb32ed","Type":"ContainerDied","Data":"9ac04c7f5a9f076732fa9df787f5feb65942710da39885f095ed8a5ae40b1f38"} Apr 16 18:00:18.984137 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:18.984093 2571 scope.go:117] "RemoveContainer" containerID="bd5fcf008e93a4da4793633382ca56a73c5542909e9d0f10f09529e4a748a5c1" Apr 16 18:00:18.984429 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:18.984410 2571 scope.go:117] "RemoveContainer" containerID="bd5fcf008e93a4da4793633382ca56a73c5542909e9d0f10f09529e4a748a5c1" Apr 16 18:00:18.995261 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:00:18.995228 2571 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_kserve-ci-e2e-test_bcd58984-1dbd-4fff-b16d-420a04eb32ed_0 in pod sandbox 9c4fd4147919a693b1b9e0622de38496a731a19ac83ab204412b9a04e32a9102 from index: no such id: 'bd5fcf008e93a4da4793633382ca56a73c5542909e9d0f10f09529e4a748a5c1'" containerID="bd5fcf008e93a4da4793633382ca56a73c5542909e9d0f10f09529e4a748a5c1" Apr 16 18:00:18.995330 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:00:18.995277 2571 kuberuntime_container.go:951] "Unhandled Error" err="failed to remove pod init container \"storage-initializer\": rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_kserve-ci-e2e-test_bcd58984-1dbd-4fff-b16d-420a04eb32ed_0 in pod sandbox 9c4fd4147919a693b1b9e0622de38496a731a19ac83ab204412b9a04e32a9102 from index: no such id: 'bd5fcf008e93a4da4793633382ca56a73c5542909e9d0f10f09529e4a748a5c1'; Skipping pod 
\"isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_kserve-ci-e2e-test(bcd58984-1dbd-4fff-b16d-420a04eb32ed)\"" logger="UnhandledError" Apr 16 18:00:18.996578 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:00:18.996558 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-initializer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-initializer pod=isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_kserve-ci-e2e-test(bcd58984-1dbd-4fff-b16d-420a04eb32ed)\"" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" podUID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" Apr 16 18:00:19.580083 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.580054 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc"] Apr 16 18:00:19.680801 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.680769 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj"] Apr 16 18:00:19.681202 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681185 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0bc5d3d7-7e42-4b9d-b345-945560f025df" containerName="storage-initializer" Apr 16 18:00:19.681246 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681204 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc5d3d7-7e42-4b9d-b345-945560f025df" containerName="storage-initializer" Apr 16 18:00:19.681246 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681224 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" Apr 16 18:00:19.681246 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681230 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" Apr 16 18:00:19.681246 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681239 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="storage-initializer" Apr 16 18:00:19.681246 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681245 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="storage-initializer" Apr 16 18:00:19.681395 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681306 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="0bc5d3d7-7e42-4b9d-b345-945560f025df" containerName="storage-initializer" Apr 16 18:00:19.681395 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681320 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="a320e17f-b2fc-4b71-80f0-78f2096cdbf2" containerName="kserve-container" Apr 16 18:00:19.681395 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681381 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0bc5d3d7-7e42-4b9d-b345-945560f025df" containerName="storage-initializer" Apr 16 18:00:19.681395 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681388 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc5d3d7-7e42-4b9d-b345-945560f025df" containerName="storage-initializer" Apr 16 18:00:19.681509 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.681445 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="0bc5d3d7-7e42-4b9d-b345-945560f025df" 
containerName="storage-initializer" Apr 16 18:00:19.685723 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.685706 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:00:19.687946 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.687927 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-kfjrd\"" Apr 16 18:00:19.693806 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.693781 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj"] Apr 16 18:00:19.823049 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.823017 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f95ee321-a0f8-4022-80c5-5e74153f19bc-kserve-provision-location\") pod \"raw-sklearn-33f7f-predictor-8648f56f89-qnmqj\" (UID: \"f95ee321-a0f8-4022-80c5-5e74153f19bc\") " pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:00:19.923950 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.923922 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f95ee321-a0f8-4022-80c5-5e74153f19bc-kserve-provision-location\") pod \"raw-sklearn-33f7f-predictor-8648f56f89-qnmqj\" (UID: \"f95ee321-a0f8-4022-80c5-5e74153f19bc\") " pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:00:19.924275 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.924256 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f95ee321-a0f8-4022-80c5-5e74153f19bc-kserve-provision-location\") pod \"raw-sklearn-33f7f-predictor-8648f56f89-qnmqj\" (UID: \"f95ee321-a0f8-4022-80c5-5e74153f19bc\") " pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:00:19.989296 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.989274 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_bcd58984-1dbd-4fff-b16d-420a04eb32ed/storage-initializer/1.log" Apr 16 18:00:19.996014 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:19.995997 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:00:20.129626 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.129602 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj"] Apr 16 18:00:20.131529 ip-10-0-138-134 kubenswrapper[2571]: W0416 18:00:20.131502 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf95ee321_a0f8_4022_80c5_5e74153f19bc.slice/crio-cb802561cfa60b78264f79aacc46e8bcb7b2910471e961b7e8eeb2bdbe5c7e45 WatchSource:0}: Error finding container cb802561cfa60b78264f79aacc46e8bcb7b2910471e961b7e8eeb2bdbe5c7e45: Status 404 returned error can't find the container with id cb802561cfa60b78264f79aacc46e8bcb7b2910471e961b7e8eeb2bdbe5c7e45 Apr 16 18:00:20.145167 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.145146 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_bcd58984-1dbd-4fff-b16d-420a04eb32ed/storage-initializer/1.log" Apr 16 18:00:20.145264 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.145203 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:20.225575 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.225548 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/bcd58984-1dbd-4fff-b16d-420a04eb32ed-cabundle-cert\") pod \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\" (UID: \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\") " Apr 16 18:00:20.225831 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.225650 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bcd58984-1dbd-4fff-b16d-420a04eb32ed-kserve-provision-location\") pod \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\" (UID: \"bcd58984-1dbd-4fff-b16d-420a04eb32ed\") " Apr 16 18:00:20.225932 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.225899 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcd58984-1dbd-4fff-b16d-420a04eb32ed-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "bcd58984-1dbd-4fff-b16d-420a04eb32ed" (UID: "bcd58984-1dbd-4fff-b16d-420a04eb32ed"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:00:20.225989 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.225938 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd58984-1dbd-4fff-b16d-420a04eb32ed-cabundle-cert" (OuterVolumeSpecName: "cabundle-cert") pod "bcd58984-1dbd-4fff-b16d-420a04eb32ed" (UID: "bcd58984-1dbd-4fff-b16d-420a04eb32ed"). InnerVolumeSpecName "cabundle-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:00:20.326776 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.326755 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bcd58984-1dbd-4fff-b16d-420a04eb32ed-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 18:00:20.326776 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.326776 2571 reconciler_common.go:299] "Volume detached for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/bcd58984-1dbd-4fff-b16d-420a04eb32ed-cabundle-cert\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 18:00:20.994686 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.994650 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc_bcd58984-1dbd-4fff-b16d-420a04eb32ed/storage-initializer/1.log" Apr 16 18:00:20.995300 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.994799 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" Apr 16 18:00:20.995300 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.994799 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc" event={"ID":"bcd58984-1dbd-4fff-b16d-420a04eb32ed","Type":"ContainerDied","Data":"9c4fd4147919a693b1b9e0622de38496a731a19ac83ab204412b9a04e32a9102"} Apr 16 18:00:20.995300 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.994852 2571 scope.go:117] "RemoveContainer" containerID="9ac04c7f5a9f076732fa9df787f5feb65942710da39885f095ed8a5ae40b1f38" Apr 16 18:00:20.996398 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.996372 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" event={"ID":"f95ee321-a0f8-4022-80c5-5e74153f19bc","Type":"ContainerStarted","Data":"2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928"} Apr 16 18:00:20.996484 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:20.996406 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" event={"ID":"f95ee321-a0f8-4022-80c5-5e74153f19bc","Type":"ContainerStarted","Data":"cb802561cfa60b78264f79aacc46e8bcb7b2910471e961b7e8eeb2bdbe5c7e45"} Apr 16 18:00:21.043458 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:21.043427 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc"] Apr 16 18:00:21.045789 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:21.045768 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-67255c-predictor-54b48f6c9f-phtfc"] Apr 16 18:00:23.035081 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:23.035036 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" path="/var/lib/kubelet/pods/bcd58984-1dbd-4fff-b16d-420a04eb32ed/volumes" Apr 16 18:00:25.012729 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:25.012689 2571 generic.go:358] "Generic (PLEG): container finished" podID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerID="2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928" exitCode=0 Apr 16 18:00:25.013163 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:25.012765 2571 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" event={"ID":"f95ee321-a0f8-4022-80c5-5e74153f19bc","Type":"ContainerDied","Data":"2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928"} Apr 16 18:00:26.018091 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:26.018053 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" event={"ID":"f95ee321-a0f8-4022-80c5-5e74153f19bc","Type":"ContainerStarted","Data":"5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383"} Apr 16 18:00:26.018471 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:26.018450 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:00:26.019610 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:26.019584 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused" Apr 16 18:00:26.037666 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:26.037624 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" podStartSLOduration=7.03760903 podStartE2EDuration="7.03760903s" podCreationTimestamp="2026-04-16 18:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:00:26.036564126 +0000 UTC m=+1147.509793598" watchObservedRunningTime="2026-04-16 18:00:26.03760903 +0000 UTC m=+1147.510838507" Apr 16 18:00:27.022016 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:27.021973 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused" Apr 16 18:00:37.022144 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:37.022061 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused" Apr 16 18:00:47.022073 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:47.022033 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused" Apr 16 18:00:57.022538 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:00:57.022441 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused" Apr 16 18:01:07.022126 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:07.022079 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" 
podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused" Apr 16 18:01:17.021833 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:17.021795 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused" Apr 16 18:01:27.023377 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:27.023336 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:01:29.770226 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.770197 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj"] Apr 16 18:01:29.770678 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.770421 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" containerID="cri-o://5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383" gracePeriod=30 Apr 16 18:01:29.845437 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.845410 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz"] Apr 16 18:01:29.845759 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.845748 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" containerName="storage-initializer" Apr 16 18:01:29.845805 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.845761 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" containerName="storage-initializer" Apr 16 18:01:29.845894 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.845814 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" containerName="storage-initializer" Apr 16 18:01:29.845894 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.845822 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" containerName="storage-initializer" Apr 16 18:01:29.845982 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.845902 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" containerName="storage-initializer" Apr 16 18:01:29.845982 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.845909 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd58984-1dbd-4fff-b16d-420a04eb32ed" containerName="storage-initializer" Apr 16 18:01:29.850152 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.850134 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" Apr 16 18:01:29.859300 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.859275 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz"] Apr 16 18:01:29.938012 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:29.937975 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/801c0355-563e-435d-a227-c6f3857ac5b2-kserve-provision-location\") pod \"raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz\" (UID: \"801c0355-563e-435d-a227-c6f3857ac5b2\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" Apr 16 18:01:30.038950 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:30.038877 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/801c0355-563e-435d-a227-c6f3857ac5b2-kserve-provision-location\") pod \"raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz\" (UID: \"801c0355-563e-435d-a227-c6f3857ac5b2\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" Apr 16 18:01:30.039192 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:30.039173 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/801c0355-563e-435d-a227-c6f3857ac5b2-kserve-provision-location\") pod \"raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz\" (UID: \"801c0355-563e-435d-a227-c6f3857ac5b2\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" Apr 16 18:01:30.160552 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:30.160530 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" Apr 16 18:01:30.299415 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:30.299342 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz"] Apr 16 18:01:30.302586 ip-10-0-138-134 kubenswrapper[2571]: W0416 18:01:30.302561 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod801c0355_563e_435d_a227_c6f3857ac5b2.slice/crio-914296b3867b083ebc9dd57aae73684bf95c08035bf89d01f0514c5608b9dfd0 WatchSource:0}: Error finding container 914296b3867b083ebc9dd57aae73684bf95c08035bf89d01f0514c5608b9dfd0: Status 404 returned error can't find the container with id 914296b3867b083ebc9dd57aae73684bf95c08035bf89d01f0514c5608b9dfd0 Apr 16 18:01:31.232894 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:31.232846 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" event={"ID":"801c0355-563e-435d-a227-c6f3857ac5b2","Type":"ContainerStarted","Data":"28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad"} Apr 16 18:01:31.232894 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:31.232892 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" event={"ID":"801c0355-563e-435d-a227-c6f3857ac5b2","Type":"ContainerStarted","Data":"914296b3867b083ebc9dd57aae73684bf95c08035bf89d01f0514c5608b9dfd0"} Apr 16 18:01:33.610526 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:33.610505 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:01:33.667302 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:33.667238 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f95ee321-a0f8-4022-80c5-5e74153f19bc-kserve-provision-location\") pod \"f95ee321-a0f8-4022-80c5-5e74153f19bc\" (UID: \"f95ee321-a0f8-4022-80c5-5e74153f19bc\") " Apr 16 18:01:33.667540 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:33.667518 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f95ee321-a0f8-4022-80c5-5e74153f19bc-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "f95ee321-a0f8-4022-80c5-5e74153f19bc" (UID: "f95ee321-a0f8-4022-80c5-5e74153f19bc"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:01:33.768574 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:33.768549 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f95ee321-a0f8-4022-80c5-5e74153f19bc-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 18:01:34.244986 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.244952 2571 generic.go:358] "Generic (PLEG): container finished" podID="801c0355-563e-435d-a227-c6f3857ac5b2" containerID="28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad" exitCode=0 Apr 16 18:01:34.245126 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.245029 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" event={"ID":"801c0355-563e-435d-a227-c6f3857ac5b2","Type":"ContainerDied","Data":"28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad"} Apr 16 18:01:34.246416 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.246391 2571 generic.go:358] "Generic (PLEG): container finished" podID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerID="5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383" exitCode=0 Apr 16 18:01:34.246537 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.246446 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" event={"ID":"f95ee321-a0f8-4022-80c5-5e74153f19bc","Type":"ContainerDied","Data":"5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383"} Apr 16 18:01:34.246537 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.246454 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" Apr 16 18:01:34.246537 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.246470 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj" event={"ID":"f95ee321-a0f8-4022-80c5-5e74153f19bc","Type":"ContainerDied","Data":"cb802561cfa60b78264f79aacc46e8bcb7b2910471e961b7e8eeb2bdbe5c7e45"} Apr 16 18:01:34.246537 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.246484 2571 scope.go:117] "RemoveContainer" containerID="5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383" Apr 16 18:01:34.261356 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.261336 2571 scope.go:117] "RemoveContainer" containerID="2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928" Apr 16 18:01:34.273643 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.273619 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj"] Apr 16 18:01:34.274140 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.274124 2571 scope.go:117] "RemoveContainer" containerID="5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383" Apr 16 18:01:34.274382 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:01:34.274365 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383\": container with ID starting with 5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383 not found: ID does not exist" containerID="5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383" Apr 16 18:01:34.274453 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.274388 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383"} err="failed to get container status \"5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383\": rpc error: code = NotFound desc = could not find container \"5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383\": container with ID starting with 5fe3d145c4233222bd8d95b3a530e930c7892c57dc5857ede73f844b1066b383 not found: ID does not exist" Apr 16 18:01:34.274453 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.274404 2571 scope.go:117] "RemoveContainer" containerID="2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928" Apr 16 18:01:34.274632 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:01:34.274615 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928\": container with ID starting with 2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928 not found: ID does not exist" containerID="2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928" Apr 16 18:01:34.274677 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.274637 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928"} err="failed to get container status \"2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928\": rpc error: code = NotFound desc = could not find container \"2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928\": container with ID starting with 
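The paired ErrorS/DeleteContainer lines above are a benign race: the containers were already gone by the time the kubelet re-queried the runtime, so CRI-O answered the ContainerStatus RPC with gRPC NotFound and deletion was effectively already done. A sketch of that idempotent pattern; the criClient interface and the gone stub are illustrative assumptions, not the real CRI client:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// criClient stands in for the CRI runtime client the kubelet uses over gRPC.
type criClient interface {
	ContainerStatus(ctx context.Context, id string) error
	RemoveContainer(ctx context.Context, id string) error
}

// gone simulates a runtime that has already forgotten the container.
type gone struct{}

func (gone) ContainerStatus(ctx context.Context, id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}
func (gone) RemoveContainer(ctx context.Context, id string) error { return nil }

// deleteContainer treats NotFound as success: the container is already gone,
// mirroring the harmless race recorded in the log above.
func deleteContainer(ctx context.Context, c criClient, id string) error {
	if err := c.ContainerStatus(ctx, id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	return c.RemoveContainer(ctx, id)
}

func main() {
	err := deleteContainer(context.Background(), gone{}, "2a511d544145745c5a8c1ad0962ee2f162994e32e2611ca3cfb76758fc6af928")
	fmt.Println("delete result:", err) // delete result: <nil>
}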
Apr 16 18:01:34.277079 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:34.277061 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-33f7f-predictor-8648f56f89-qnmqj"]
Apr 16 18:01:35.035594 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:35.035560 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" path="/var/lib/kubelet/pods/f95ee321-a0f8-4022-80c5-5e74153f19bc/volumes"
Apr 16 18:01:35.251113 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:35.251080 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" event={"ID":"801c0355-563e-435d-a227-c6f3857ac5b2","Type":"ContainerStarted","Data":"ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc"}
Apr 16 18:01:35.251399 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:35.251373 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz"
Apr 16 18:01:35.252632 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:35.252608 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 16 18:01:35.268285 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:35.268245 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" podStartSLOduration=6.268229789 podStartE2EDuration="6.268229789s" podCreationTimestamp="2026-04-16 18:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:01:35.266470026 +0000 UTC m=+1216.739699499" watchObservedRunningTime="2026-04-16 18:01:35.268229789 +0000 UTC m=+1216.741459262"
Apr 16 18:01:36.255821 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:36.255773 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 16 18:01:46.256225 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:46.256095 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 16 18:01:56.255900 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:01:56.255834 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 16 18:02:06.256813 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:06.256725 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 16 18:02:16.256282 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:16.256241 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 16 18:02:26.256375 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:26.256326 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 16 18:02:36.257036 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:36.257009 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz"
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:02:43.882005 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:43.881975 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/801c0355-563e-435d-a227-c6f3857ac5b2-kserve-provision-location\") on node \"ip-10-0-138-134.ec2.internal\" DevicePath \"\"" Apr 16 18:02:44.492458 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.492419 2571 generic.go:358] "Generic (PLEG): container finished" podID="801c0355-563e-435d-a227-c6f3857ac5b2" containerID="ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc" exitCode=0 Apr 16 18:02:44.492635 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.492468 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" event={"ID":"801c0355-563e-435d-a227-c6f3857ac5b2","Type":"ContainerDied","Data":"ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc"} Apr 16 18:02:44.492635 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.492498 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" event={"ID":"801c0355-563e-435d-a227-c6f3857ac5b2","Type":"ContainerDied","Data":"914296b3867b083ebc9dd57aae73684bf95c08035bf89d01f0514c5608b9dfd0"} Apr 16 18:02:44.492635 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.492496 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz" Apr 16 18:02:44.492635 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.492567 2571 scope.go:117] "RemoveContainer" containerID="ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc" Apr 16 18:02:44.502414 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.502393 2571 scope.go:117] "RemoveContainer" containerID="28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad" Apr 16 18:02:44.510471 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.510454 2571 scope.go:117] "RemoveContainer" containerID="ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc" Apr 16 18:02:44.510732 ip-10-0-138-134 kubenswrapper[2571]: E0416 18:02:44.510713 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc\": container with ID starting with ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc not found: ID does not exist" containerID="ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc" Apr 16 18:02:44.510781 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.510738 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc"} err="failed to get container status \"ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc\": rpc error: code = NotFound desc = could not find container \"ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc\": container with ID starting with ba7bf9b758ffe8d161b2c8a82ae931403f22bdcc2aa32723f34f83b2b2d167cc not found: ID does not exist" Apr 16 18:02:44.510781 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.510754 2571 scope.go:117] "RemoveContainer" containerID="28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad" Apr 16 18:02:44.511039 
ip-10-0-138-134 kubenswrapper[2571]: E0416 18:02:44.511019 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad\": container with ID starting with 28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad not found: ID does not exist" containerID="28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad" Apr 16 18:02:44.511086 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.511046 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad"} err="failed to get container status \"28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad\": rpc error: code = NotFound desc = could not find container \"28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad\": container with ID starting with 28275d057530c0597b55e2a59416e801d993b0349dab389bdec9663e80e111ad not found: ID does not exist" Apr 16 18:02:44.513412 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.513388 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz"] Apr 16 18:02:44.516943 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:44.516884 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-c5b19-predictor-854c5ccb9-zn6xz"] Apr 16 18:02:45.035844 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:02:45.035809 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" path="/var/lib/kubelet/pods/801c0355-563e-435d-a227-c6f3857ac5b2/volumes" Apr 16 18:03:04.946606 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.946565 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xznzl/must-gather-xh49v"] Apr 16 18:03:04.947165 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947141 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" Apr 16 18:03:04.947276 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947168 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" Apr 16 18:03:04.947276 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947188 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="storage-initializer" Apr 16 18:03:04.947276 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947196 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="storage-initializer" Apr 16 18:03:04.947276 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947218 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" Apr 16 18:03:04.947276 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947227 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" Apr 16 18:03:04.947276 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947245 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" 
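Deletion starts with "Killing container with a grace period ... gracePeriod=30": the runtime sends SIGTERM, waits up to the grace period, and only then escalates to SIGKILL; here the kserve-container exited cleanly within about five seconds (exitCode=0). A toy process-level sketch of that escalation, standard library only and not CRI-O's implementation:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace, then SIGKILLs,
// mirroring the gracePeriod=30 semantics in the log above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM)

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited on its own, as the kserve-container did here
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stopped:", stopWithGrace(cmd, 30*time.Second))
}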
containerName="storage-initializer" Apr 16 18:03:04.947276 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947253 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="storage-initializer" Apr 16 18:03:04.947640 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947352 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="f95ee321-a0f8-4022-80c5-5e74153f19bc" containerName="kserve-container" Apr 16 18:03:04.947640 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.947367 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="801c0355-563e-435d-a227-c6f3857ac5b2" containerName="kserve-container" Apr 16 18:03:04.950545 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.950523 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xznzl/must-gather-xh49v" Apr 16 18:03:04.952774 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.952748 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-xznzl\"/\"default-dockercfg-b8sdr\"" Apr 16 18:03:04.952908 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.952757 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-xznzl\"/\"kube-root-ca.crt\"" Apr 16 18:03:04.953000 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.952881 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-xznzl\"/\"openshift-service-ca.crt\"" Apr 16 18:03:04.958645 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:04.958627 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xznzl/must-gather-xh49v"] Apr 16 18:03:05.038521 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.038499 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kdxg\" (UniqueName: \"kubernetes.io/projected/97ee05a2-3ce3-49eb-aa41-287e6eafc833-kube-api-access-4kdxg\") pod \"must-gather-xh49v\" (UID: \"97ee05a2-3ce3-49eb-aa41-287e6eafc833\") " pod="openshift-must-gather-xznzl/must-gather-xh49v" Apr 16 18:03:05.038620 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.038546 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97ee05a2-3ce3-49eb-aa41-287e6eafc833-must-gather-output\") pod \"must-gather-xh49v\" (UID: \"97ee05a2-3ce3-49eb-aa41-287e6eafc833\") " pod="openshift-must-gather-xznzl/must-gather-xh49v" Apr 16 18:03:05.139338 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.139313 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97ee05a2-3ce3-49eb-aa41-287e6eafc833-must-gather-output\") pod \"must-gather-xh49v\" (UID: \"97ee05a2-3ce3-49eb-aa41-287e6eafc833\") " pod="openshift-must-gather-xznzl/must-gather-xh49v" Apr 16 18:03:05.139432 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.139385 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4kdxg\" (UniqueName: \"kubernetes.io/projected/97ee05a2-3ce3-49eb-aa41-287e6eafc833-kube-api-access-4kdxg\") pod \"must-gather-xh49v\" (UID: \"97ee05a2-3ce3-49eb-aa41-287e6eafc833\") " pod="openshift-must-gather-xznzl/must-gather-xh49v" Apr 16 18:03:05.139613 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.139595 2571 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97ee05a2-3ce3-49eb-aa41-287e6eafc833-must-gather-output\") pod \"must-gather-xh49v\" (UID: \"97ee05a2-3ce3-49eb-aa41-287e6eafc833\") " pod="openshift-must-gather-xznzl/must-gather-xh49v" Apr 16 18:03:05.147684 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.147657 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kdxg\" (UniqueName: \"kubernetes.io/projected/97ee05a2-3ce3-49eb-aa41-287e6eafc833-kube-api-access-4kdxg\") pod \"must-gather-xh49v\" (UID: \"97ee05a2-3ce3-49eb-aa41-287e6eafc833\") " pod="openshift-must-gather-xznzl/must-gather-xh49v" Apr 16 18:03:05.280344 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.280287 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xznzl/must-gather-xh49v" Apr 16 18:03:05.399055 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.399019 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xznzl/must-gather-xh49v"] Apr 16 18:03:05.403719 ip-10-0-138-134 kubenswrapper[2571]: W0416 18:03:05.403686 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97ee05a2_3ce3_49eb_aa41_287e6eafc833.slice/crio-836f09b7f8bedabcfe13a3313c8963d7d3a111a302539e7aaeb555e693bedefa WatchSource:0}: Error finding container 836f09b7f8bedabcfe13a3313c8963d7d3a111a302539e7aaeb555e693bedefa: Status 404 returned error can't find the container with id 836f09b7f8bedabcfe13a3313c8963d7d3a111a302539e7aaeb555e693bedefa Apr 16 18:03:05.572957 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:05.572877 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xznzl/must-gather-xh49v" event={"ID":"97ee05a2-3ce3-49eb-aa41-287e6eafc833","Type":"ContainerStarted","Data":"836f09b7f8bedabcfe13a3313c8963d7d3a111a302539e7aaeb555e693bedefa"} Apr 16 18:03:06.579350 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:06.579265 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xznzl/must-gather-xh49v" event={"ID":"97ee05a2-3ce3-49eb-aa41-287e6eafc833","Type":"ContainerStarted","Data":"d80f880dcb1113088ce64aa551134576b0e40e0d3d8553c7c8b755dce0353e82"} Apr 16 18:03:06.579350 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:06.579304 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xznzl/must-gather-xh49v" event={"ID":"97ee05a2-3ce3-49eb-aa41-287e6eafc833","Type":"ContainerStarted","Data":"075f25ea483ae4342ae86997389c35ece8e53c3ec650252fbd9d4a1578119a8f"} Apr 16 18:03:06.595940 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:06.595892 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xznzl/must-gather-xh49v" podStartSLOduration=1.734550595 podStartE2EDuration="2.595877852s" podCreationTimestamp="2026-04-16 18:03:04 +0000 UTC" firstStartedPulling="2026-04-16 18:03:05.405486924 +0000 UTC m=+1306.878716379" lastFinishedPulling="2026-04-16 18:03:06.266814182 +0000 UTC m=+1307.740043636" observedRunningTime="2026-04-16 18:03:06.594517178 +0000 UTC m=+1308.067746662" watchObservedRunningTime="2026-04-16 18:03:06.595877852 +0000 UTC m=+1308.069107325" Apr 16 18:03:07.727879 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:07.727834 2571 log.go:25] "Finished parsing log file" 
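The startup-latency line decomposes as: podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling) from that. A worked recomputation of the must-gather numbers above; it agrees with the logged values to within a nanosecond, the last digit differing because the kubelet subtracts the monotonic-clock (m=) readings:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the "Observed pod startup duration" entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2026-04-16T18:03:04Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-04-16T18:03:06.595877852Z")
	pullStart, _ := time.Parse(time.RFC3339Nano, "2026-04-16T18:03:05.405486924Z")
	pullEnd, _ := time.Parse(time.RFC3339Nano, "2026-04-16T18:03:06.266814182Z")

	e2e := running.Sub(created)         // 2.595877852s, matches podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ~1.734550594s vs logged 1.734550595 (clock source)

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}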
path="/var/log/pods/kube-system_global-pull-secret-syncer-mtgtk_a7cbf992-bfb2-4889-ba57-9de812ce16d4/global-pull-secret-syncer/0.log" Apr 16 18:03:07.811263 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:07.811234 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-hc5pn_21c668ba-6d2b-43f2-926f-50b6a51598db/konnectivity-agent/0.log" Apr 16 18:03:07.858558 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:07.858529 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-138-134.ec2.internal_612d2783cb89512547948baa230ca5ee/haproxy/0.log" Apr 16 18:03:11.312937 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.312903 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-6667474d89-cr6kk_dbefec75-4a25-443d-8f9e-1fc4a14fce37/cluster-monitoring-operator/0.log" Apr 16 18:03:11.339078 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.339048 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7479c89684-tqm8x_3d1c5352-c486-4390-8ad5-dc9351d290a0/kube-state-metrics/0.log" Apr 16 18:03:11.361141 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.361114 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7479c89684-tqm8x_3d1c5352-c486-4390-8ad5-dc9351d290a0/kube-rbac-proxy-main/0.log" Apr 16 18:03:11.389036 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.389002 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7479c89684-tqm8x_3d1c5352-c486-4390-8ad5-dc9351d290a0/kube-rbac-proxy-self/0.log" Apr 16 18:03:11.418168 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.418079 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-76fcbb669c-ql7h9_c69c0a1b-8ad3-4ab8-8a36-483c4c9a1629/metrics-server/0.log" Apr 16 18:03:11.643205 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.643166 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-q9rn9_4b6b506a-5c22-4f23-ae73-9f8da5854996/node-exporter/0.log" Apr 16 18:03:11.664955 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.664926 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-q9rn9_4b6b506a-5c22-4f23-ae73-9f8da5854996/kube-rbac-proxy/0.log" Apr 16 18:03:11.687672 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.687597 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-q9rn9_4b6b506a-5c22-4f23-ae73-9f8da5854996/init-textfile/0.log" Apr 16 18:03:11.713685 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.713660 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5669946b84-pxgnm_de996f4c-ad08-48ba-bbdf-ff07993fa471/kube-rbac-proxy-main/0.log" Apr 16 18:03:11.735067 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.735034 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5669946b84-pxgnm_de996f4c-ad08-48ba-bbdf-ff07993fa471/kube-rbac-proxy-self/0.log" Apr 16 18:03:11.757482 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.757451 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5669946b84-pxgnm_de996f4c-ad08-48ba-bbdf-ff07993fa471/openshift-state-metrics/0.log" Apr 16 
18:03:11.929796 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.929765 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-78f957474d-sh7h8_b097f1da-b001-41dc-86d1-53dd5913ed6e/prometheus-operator/0.log" Apr 16 18:03:11.948687 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.948605 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-78f957474d-sh7h8_b097f1da-b001-41dc-86d1-53dd5913ed6e/kube-rbac-proxy/0.log" Apr 16 18:03:11.973122 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:11.973098 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-9cb97cd87-4hkn2_c5bf826c-3f1f-4726-8780-2c528e5a6bb0/prometheus-operator-admission-webhook/0.log" Apr 16 18:03:12.080889 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:12.080841 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f975cbf96-5wv4r_925b7d14-293d-4b38-9183-53e2d8a5d716/thanos-query/0.log" Apr 16 18:03:12.100975 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:12.100930 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f975cbf96-5wv4r_925b7d14-293d-4b38-9183-53e2d8a5d716/kube-rbac-proxy-web/0.log" Apr 16 18:03:12.122458 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:12.122423 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f975cbf96-5wv4r_925b7d14-293d-4b38-9183-53e2d8a5d716/kube-rbac-proxy/0.log" Apr 16 18:03:12.143701 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:12.143674 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f975cbf96-5wv4r_925b7d14-293d-4b38-9183-53e2d8a5d716/prom-label-proxy/0.log" Apr 16 18:03:12.168632 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:12.168604 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f975cbf96-5wv4r_925b7d14-293d-4b38-9183-53e2d8a5d716/kube-rbac-proxy-rules/0.log" Apr 16 18:03:12.193467 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:12.193434 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f975cbf96-5wv4r_925b7d14-293d-4b38-9183-53e2d8a5d716/kube-rbac-proxy-metrics/0.log" Apr 16 18:03:14.135245 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:14.135213 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66b4746c66-lncl6_acbe6a76-08fe-460f-957f-725cfd01cee8/console/0.log" Apr 16 18:03:14.167324 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:14.167298 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-586b57c7b4-r4f69_42ff6ad5-c964-401b-813a-00dfd31def98/download-server/0.log" Apr 16 18:03:14.869179 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:14.869130 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx"] Apr 16 18:03:14.879041 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:14.879011 2571 util.go:30] "No sandbox for pod can be found. 
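The burst of "Finished parsing log file" entries is the must-gather run reading container logs back from /var/log/pods, where each path encodes namespace_podname_poduid/container/restart-count.log. A small standard-library sketch of walking that layout (the directory convention is taken from the paths above; the parsing helper is illustrative):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/var/log/pods"
	_ = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".log") {
			return nil
		}
		// Relative layout: <namespace>_<pod>_<uid>/<container>/<restart>.log
		rel, _ := filepath.Rel(root, path)
		parts := strings.SplitN(rel, string(filepath.Separator), 3)
		if len(parts) == 3 {
			meta := strings.SplitN(parts[0], "_", 3)
			if len(meta) == 3 {
				fmt.Printf("ns=%s pod=%s container=%s file=%s\n",
					meta[0], meta[1], parts[1], parts[2])
			}
		}
		return nil
	})
}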
Need to start a new one" pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:14.884028 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:14.884002 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx"] Apr 16 18:03:15.032646 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.032610 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-podres\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.032804 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.032682 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-sys\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.032804 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.032708 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-proc\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.032804 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.032734 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljzdf\" (UniqueName: \"kubernetes.io/projected/6b0d6d69-4c82-4da6-ab8b-b256462b0843-kube-api-access-ljzdf\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.032804 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.032753 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-lib-modules\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.133995 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.133927 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-sys\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.133995 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.133962 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-proc\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.134186 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.134025 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: 
\"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-proc\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.134186 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.134027 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljzdf\" (UniqueName: \"kubernetes.io/projected/6b0d6d69-4c82-4da6-ab8b-b256462b0843-kube-api-access-ljzdf\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.134186 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.134063 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-lib-modules\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.134186 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.134090 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-sys\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.134407 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.134218 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-podres\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.134407 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.134267 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-lib-modules\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.134407 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.134339 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6b0d6d69-4c82-4da6-ab8b-b256462b0843-podres\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.141735 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.141711 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljzdf\" (UniqueName: \"kubernetes.io/projected/6b0d6d69-4c82-4da6-ab8b-b256462b0843-kube-api-access-ljzdf\") pod \"perf-node-gather-daemonset-c6dvx\" (UID: \"6b0d6d69-4c82-4da6-ab8b-b256462b0843\") " pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.151725 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.151707 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-5pktg_971c52dd-85d8-47b8-b0b5-7369b7459c82/dns/0.log" Apr 16 18:03:15.172432 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.172403 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-dns_dns-default-5pktg_971c52dd-85d8-47b8-b0b5-7369b7459c82/kube-rbac-proxy/0.log" Apr 16 18:03:15.193398 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.193379 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.285272 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.285245 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-cgldj_3d328985-f90b-469d-a8a4-9962d8311ef2/dns-node-resolver/0.log" Apr 16 18:03:15.313509 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.313480 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx"] Apr 16 18:03:15.315744 ip-10-0-138-134 kubenswrapper[2571]: W0416 18:03:15.315719 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6b0d6d69_4c82_4da6_ab8b_b256462b0843.slice/crio-c7a545c10c9627501603080586a8d92f51a4153ed7b2cd89c7fdcd8c07636e22 WatchSource:0}: Error finding container c7a545c10c9627501603080586a8d92f51a4153ed7b2cd89c7fdcd8c07636e22: Status 404 returned error can't find the container with id c7a545c10c9627501603080586a8d92f51a4153ed7b2cd89c7fdcd8c07636e22 Apr 16 18:03:15.620101 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.620067 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" event={"ID":"6b0d6d69-4c82-4da6-ab8b-b256462b0843","Type":"ContainerStarted","Data":"abe39bb1061f786ec5778656fd33df80dba3fb9f5cf849a1b1ca517c037641c8"} Apr 16 18:03:15.620101 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.620105 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" event={"ID":"6b0d6d69-4c82-4da6-ab8b-b256462b0843","Type":"ContainerStarted","Data":"c7a545c10c9627501603080586a8d92f51a4153ed7b2cd89c7fdcd8c07636e22"} Apr 16 18:03:15.620313 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.620249 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:15.638877 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.636960 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" podStartSLOduration=1.636943392 podStartE2EDuration="1.636943392s" podCreationTimestamp="2026-04-16 18:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:03:15.634573728 +0000 UTC m=+1317.107803200" watchObservedRunningTime="2026-04-16 18:03:15.636943392 +0000 UTC m=+1317.110172865" Apr 16 18:03:15.746751 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:15.746684 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-hdwwl_dd4660ce-5c75-41dc-a8ed-0e0e55ac0a01/node-ca/0.log" Apr 16 18:03:16.463510 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:16.463480 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-6c44cbbfb4-k2krf_9fc88243-e880-437b-8e63-4cc27b6580f3/router/0.log" Apr 16 18:03:16.775349 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:16.775270 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-canary_ingress-canary-2l84s_b6580a74-c19a-4cd2-b8e2-e8a8423dc761/serve-healthcheck-canary/0.log" Apr 16 18:03:17.187989 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:17.187952 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-5785d4fcdd-7pxpm_25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611/insights-operator/0.log" Apr 16 18:03:17.188257 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:17.188232 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-5785d4fcdd-7pxpm_25d7e7ca-5b2b-4a80-8a62-70ab5d3ba611/insights-operator/1.log" Apr 16 18:03:17.401002 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:17.400978 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-xkqb8_737fd733-20da-4813-9abe-61f8e3e58fa2/kube-rbac-proxy/0.log" Apr 16 18:03:17.426122 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:17.426099 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-xkqb8_737fd733-20da-4813-9abe-61f8e3e58fa2/exporter/0.log" Apr 16 18:03:17.463196 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:17.463125 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-xkqb8_737fd733-20da-4813-9abe-61f8e3e58fa2/extractor/0.log" Apr 16 18:03:19.375400 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:19.375330 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-bjw8k_f2494067-51ef-4a80-a329-f02b5ba3e970/server/0.log" Apr 16 18:03:19.527729 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:19.527700 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-tlp6f_99e5ce53-3010-4a08-a9af-f04c84f63ae5/manager/0.log" Apr 16 18:03:21.637378 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:21.637347 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-xznzl/perf-node-gather-daemonset-c6dvx" Apr 16 18:03:23.421320 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:23.421290 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-756bb7d76f-gzcr7_27efa923-840a-4df7-8dcb-d30a622b5c3f/kube-storage-version-migrator-operator/1.log" Apr 16 18:03:23.423026 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:23.422992 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-756bb7d76f-gzcr7_27efa923-840a-4df7-8dcb-d30a622b5c3f/kube-storage-version-migrator-operator/0.log" Apr 16 18:03:24.455067 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:24.455033 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lqx9_b9b6b64b-dc77-4472-8311-249ac8242441/kube-multus/0.log" Apr 16 18:03:24.798127 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:24.798095 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tlljg_3e976806-7125-4a84-96f3-609791878cd8/kube-multus-additional-cni-plugins/0.log" Apr 16 18:03:24.820243 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:24.820223 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tlljg_3e976806-7125-4a84-96f3-609791878cd8/egress-router-binary-copy/0.log" Apr 16 18:03:24.842789 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:24.842766 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tlljg_3e976806-7125-4a84-96f3-609791878cd8/cni-plugins/0.log" Apr 16 18:03:24.863335 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:24.863312 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tlljg_3e976806-7125-4a84-96f3-609791878cd8/bond-cni-plugin/0.log" Apr 16 18:03:24.883318 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:24.883300 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tlljg_3e976806-7125-4a84-96f3-609791878cd8/routeoverride-cni/0.log" Apr 16 18:03:24.903972 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:24.903949 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tlljg_3e976806-7125-4a84-96f3-609791878cd8/whereabouts-cni-bincopy/0.log" Apr 16 18:03:24.923832 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:24.923814 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tlljg_3e976806-7125-4a84-96f3-609791878cd8/whereabouts-cni/0.log" Apr 16 18:03:25.078613 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.078542 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-x4kh5_5b6b22f4-0d78-4198-b821-0f4f52115d9c/network-metrics-daemon/0.log" Apr 16 18:03:25.095500 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.095467 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-x4kh5_5b6b22f4-0d78-4198-b821-0f4f52115d9c/kube-rbac-proxy/0.log" Apr 16 18:03:25.843592 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.843563 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69xw9_8976bd90-ca55-4016-a1b8-fffeec56d443/ovn-controller/0.log" Apr 16 18:03:25.872710 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.872685 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69xw9_8976bd90-ca55-4016-a1b8-fffeec56d443/ovn-acl-logging/0.log" Apr 16 18:03:25.895396 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.895374 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69xw9_8976bd90-ca55-4016-a1b8-fffeec56d443/kube-rbac-proxy-node/0.log" Apr 16 18:03:25.918451 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.918430 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69xw9_8976bd90-ca55-4016-a1b8-fffeec56d443/kube-rbac-proxy-ovn-metrics/0.log" Apr 16 18:03:25.936019 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.935957 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69xw9_8976bd90-ca55-4016-a1b8-fffeec56d443/northd/0.log" Apr 16 18:03:25.958508 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.958481 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69xw9_8976bd90-ca55-4016-a1b8-fffeec56d443/nbdb/0.log" Apr 16 18:03:25.982192 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:25.982145 2571 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69xw9_8976bd90-ca55-4016-a1b8-fffeec56d443/sbdb/0.log" Apr 16 18:03:26.151575 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:26.151543 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69xw9_8976bd90-ca55-4016-a1b8-fffeec56d443/ovnkube-controller/0.log" Apr 16 18:03:27.605445 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:27.605416 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-wrf29_679af96e-47e1-4212-8ee0-e6d82d302834/network-check-target-container/0.log" Apr 16 18:03:28.490722 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:28.490694 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-wjvxt_14054c8d-e316-4c67-a78f-0c44fc1bddc2/iptables-alerter/0.log" Apr 16 18:03:29.121944 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:29.121915 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-fv6lf_5e111816-868e-46a5-9605-122507f445ea/tuned/0.log" Apr 16 18:03:30.744946 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:30.744918 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-667775844f-kqxzm_0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96/cluster-samples-operator/0.log" Apr 16 18:03:30.761629 ip-10-0-138-134 kubenswrapper[2571]: I0416 18:03:30.761605 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-667775844f-kqxzm_0cfa6e0b-cd50-46cd-8ff2-e8a24ee9ed96/cluster-samples-operator-watch/0.log"