Warning: Failed to get DSC: the server could not find the requested resource
Initial Kueue managementState:
=== RUN   TestDefaultClusterTrainingRuntimes
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Smoke'
--- SKIP: TestDefaultClusterTrainingRuntimes (0.00s)
=== RUN   TestDefaultTrainingHubRuntimesMatchDefaultClusterRuntimes
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Smoke'
--- SKIP: TestDefaultTrainingHubRuntimesMatchDefaultClusterRuntimes (0.00s)
=== RUN   TestRunTrainJobWithDefaultClusterTrainingRuntimes
    cluster_training_runtimes_test.go:161: Running TrainJob with ClusterTrainingRuntime: torch-distributed
    cluster_training_runtimes_test.go:167: Created TrainJob test-ns-zrbm8/test-trainjob-r845r successfully
    cluster_training_runtimes_test.go:178: TrainJob with ClusterTrainingRuntime 'torch-distributed' completed successfully
    cluster_training_runtimes_test.go:161: Running TrainJob with ClusterTrainingRuntime: torch-distributed-rocm
    cluster_training_runtimes_test.go:167: Created TrainJob test-ns-hgngd/test-trainjob-kqfd8 successfully
    cluster_training_runtimes_test.go:178: TrainJob with ClusterTrainingRuntime 'torch-distributed-rocm' completed successfully
    cluster_training_runtimes_test.go:161: Running TrainJob with ClusterTrainingRuntime: torch-distributed-cpu
    cluster_training_runtimes_test.go:167: Created TrainJob test-ns-87dcb/test-trainjob-wtdjj successfully
    cluster_training_runtimes_test.go:178: TrainJob with ClusterTrainingRuntime 'torch-distributed-cpu' completed successfully
    cluster_training_runtimes_test.go:161: Running TrainJob with ClusterTrainingRuntime: torch-distributed-cuda128-torch29-py312
    cluster_training_runtimes_test.go:167: Created TrainJob test-ns-qqzv7/test-trainjob-4sxmt successfully
    cluster_training_runtimes_test.go:178: TrainJob with ClusterTrainingRuntime 'torch-distributed-cuda128-torch29-py312' completed successfully
    cluster_training_runtimes_test.go:161: Running TrainJob with ClusterTrainingRuntime: torch-distributed-rocm64-torch29-py312
    cluster_training_runtimes_test.go:167: Created TrainJob test-ns-2mwd8/test-trainjob-wh52x successfully
    cluster_training_runtimes_test.go:171: Timed out after 1200.000s.
        Expected
            <*v1alpha1.TrainJob | 0xc0006b9d40>: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ObjectMeta: {
                    Name: "test-trainjob-wh52x",
                    GenerateName: "test-trainjob-",
                    Namespace: "test-ns-2mwd8",
                    SelfLink: "",
                    UID: "a7811af5-2c63-4e90-8f55-ff4c7efd633d",
                    ResourceVersion: "19349",
                    Generation: 1,
                    CreationTimestamp: {Time: 2026-04-16T10:30:18Z},
                    DeletionTimestamp: nil,
                    DeletionGracePeriodSeconds: nil,
                    Labels: nil,
                    Annotations: nil,
                    OwnerReferences: nil,
                    Finalizers: nil,
                    ManagedFields: [
                        {
                            Manager: "manager",
                            Operation: "Update",
                            APIVersion: "trainer.kubeflow.org/v1alpha1",
                            Time: {Time: 2026-04-16T10:30:18Z},
                            FieldsType: "FieldsV1",
                            FieldsV1: {
                                Raw: "{\"f:status\":{\".\":{},\"f:jobsStatus\":{\".\":{},\"k:{\\\"name\\\":\\\"node\\\"}\":{\".\":{},\"f:active\":{},\"f:failed\":{},\"f:name\":{},\"f:ready\":{},\"f:succeeded\":{},\"f:suspended\":{}}}}}",
                            },
                            Subresource: "status",
                        },
                        {
                            Manager: "trainer.test",
                            Operation: "Update",
                            APIVersion: "trainer.kubeflow.org/v1alpha1",
                            Time: {Time: 2026-04-16T10:30:18Z},
                            FieldsType: "FieldsV1",
                            FieldsV1: {
                                Raw: "{\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\".\":{},\"f:managedBy\":{},\"f:runtimeRef\":{\".\":{},\"f:apiGroup\":{},\"f:kind\":{},\"f:name\":{}},\"f:suspend\":{},\"f:trainer\":{\".\":{},\"f:command\":{}}}}",
                            },
                            Subresource: "",
                        },
                    ],
                },
                Spec: {
                    RuntimeRef: {
                        Name: "torch-distributed-rocm64-torch29-py312",
                        APIGroup: "trainer.kubeflow.org",
                        Kind: "ClusterTrainingRuntime",
                    },
                    Initializer: nil,
                    Trainer: {
                        Image: nil,
                        Command: [
                            "python",
                            "-c",
                            "import torch; print(f'PyTorch version: {torch.__version__}'); print('Training completed successfully')",
                        ],
                        Args: nil,
                        Env: nil,
                        NumNodes: nil,
                        ResourcesPerNode: nil,
                        NumProcPerNode: nil,
                    },
                    Labels: nil,
                    Annotations: nil,
                    PodTemplateOverrides: nil,
                    Suspend: false,
                    ManagedBy: "trainer.kubeflow.org/trainjob-controller",
                },
                Status: {
                    Conditions: nil,
                    JobsStatus: [
                        {Name: "node", Ready: 0, Succeeded: 0, Failed: 0, Active: 1, Suspended: 0},
                    ],
                },
            }
        to satisfy predicate
            : 0x1c71ec0
    test.go:169: Retrieving Pod Container test-ns-2mwd8/test-trainjob-wh52x-node-0-0-585nw/node logs
    test.go:169: Failed to retrieve logs for Pod Container test-ns-2mwd8/test-trainjob-wh52x-node-0-0-585nw/node, logs cannot be stored
    test.go:152: Creating ephemeral output directory as TEST_OUTPUT_DIR env variable is unset
    test.go:160: Output directory has been created at: /tmp/TestRunTrainJobWithDefaultClusterTrainingRuntimes176483398
    test.go:169: Retrieving Pod Container test-ns-qqzv7/test-trainjob-4sxmt-node-0-0-rrv8x/node logs
    test.go:169: Retrieving Pod Container test-ns-87dcb/test-trainjob-wtdjj-node-0-0-772z2/node logs
    test.go:169: Retrieving Pod Container test-ns-hgngd/test-trainjob-kqfd8-node-0-0-bpmlp/node logs
    test.go:169: Retrieving Pod Container test-ns-zrbm8/test-trainjob-r845r-node-0-0-lj7dx/node logs
--- FAIL: TestRunTrainJobWithDefaultClusterTrainingRuntimes (2221.27s)
=== RUN   TestJobSetWorkflow
    jobset_workflow_test.go:46: Created PersistentVolumeClaim test-ns-28lhx/pvc-dzpxd successfully
    utils_runtimes.go:122: Using image from ClusterTrainingRuntime "torch-distributed-cpu": quay.io/opendatahub/odh-th06-cpu-torch291-py312@sha256:24e292cfb6cec39b44b1eec90ff30f0a304786dc5f638a2b7712f415eb9c8287
    jobset_workflow_test.go:53: Created TrainingRuntime test-ns-28lhx/test-trainingruntime-7fkk6
    jobset_workflow_test.go:56: Created TrainJob test-ns-28lhx/test-trainjob-4xc4f
    jobset_workflow_test.go:62: JobSet created with 3 replicated jobs (dataset-initializer, model-initializer, node)
    jobset_workflow_test.go:65: Monitoring job execution ...
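For readers trying to reproduce the timed-out TrainJob above: the Gomega dump in TestRunTrainJobWithDefaultClusterTrainingRuntimes corresponds to a manifest roughly like the following. This is a sketch reconstructed only from the Spec fields visible in the dump, not the test's literal template; the generated names come from `generateName`.

```yaml
apiVersion: trainer.kubeflow.org/v1alpha1
kind: TrainJob
metadata:
  generateName: test-trainjob-
  namespace: test-ns-2mwd8
spec:
  managedBy: trainer.kubeflow.org/trainjob-controller
  runtimeRef:
    apiGroup: trainer.kubeflow.org
    kind: ClusterTrainingRuntime
    name: torch-distributed-rocm64-torch29-py312   # the runtime that never completed
  trainer:
    command:
      - python
      - -c
      - "import torch; print(f'PyTorch version: {torch.__version__}'); print('Training completed successfully')"
```

The status in the dump (`Active: 1`, `Succeeded: 0` after 1200 s) indicates the node job's pod started but the container never finished, consistent with the subsequent "logs cannot be stored" message for that pod.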
    jobset_workflow_test.go:65: dataset-initializer job is created: test-trainjob-4xc4f-dataset-initializer-0
    jobset_workflow_test.go:65: dataset-initializer job is completed: test-trainjob-4xc4f-dataset-initializer-0
    jobset_workflow_test.go:65: model-initializer job is created: test-trainjob-4xc4f-model-initializer-0
    jobset_workflow_test.go:65: model-initializer job is completed: test-trainjob-4xc4f-model-initializer-0
    jobset_workflow_test.go:65: node job is created: test-trainjob-4xc4f-node-0
    jobset_workflow_test.go:65: Sequential job execution is verified successfully: dataset-initializer → model-initializer → node
    jobset_workflow_test.go:70: TrainJob test-ns-28lhx/test-trainjob-4xc4f completed
    test.go:169: Retrieving Pod Container test-ns-28lhx/test-trainjob-4xc4f-dataset-initializer-0-0-4mckq/dataset-initializer logs
    test.go:152: Creating ephemeral output directory as TEST_OUTPUT_DIR env variable is unset
    test.go:160: Output directory has been created at: /tmp/TestJobSetWorkflow66690733
    test.go:169: Retrieving Pod Container test-ns-28lhx/test-trainjob-4xc4f-model-initializer-0-0-n9tl6/model-initializer logs
    test.go:169: Retrieving Pod Container test-ns-28lhx/test-trainjob-4xc4f-node-0-0-c9kd5/node logs
--- PASS: TestJobSetWorkflow (208.26s)
=== RUN   TestFailedJobSetWorkflow
    jobset_workflow_test.go:81: Created PersistentVolumeClaim test-ns-b74lg/pvc-whh74 successfully
    utils_runtimes.go:122: Using image from ClusterTrainingRuntime "torch-distributed-cpu": quay.io/opendatahub/odh-th06-cpu-torch291-py312@sha256:24e292cfb6cec39b44b1eec90ff30f0a304786dc5f638a2b7712f415eb9c8287
    jobset_workflow_test.go:88: Created TrainingRuntime test-ns-b74lg/test-trainingruntime-s4lg9
    jobset_workflow_test.go:91: Created TrainJob test-ns-b74lg/test-trainjob-fail-5sz7g
    jobset_workflow_test.go:100: JobSet failed as expected
    jobset_workflow_test.go:105: TrainJob failed as expected
    test.go:169: Retrieving Pod Container test-ns-b74lg/test-trainjob-fail-5sz7g-dataset-initializer-0-0-kdzp6/dataset-initializer logs
    test.go:152: Creating ephemeral output directory as TEST_OUTPUT_DIR env variable is unset
    test.go:160: Output directory has been created at: /tmp/TestFailedJobSetWorkflow1115006711
--- PASS: TestFailedJobSetWorkflow (13.17s)
=== RUN   TestKubeflowSdkSanity
    environment.go:75: Expected environment variable NOTEBOOK_USER_NAME not found, please use this environment variable to specify name of the authenticated Notebook user.
    test.go:152: Creating ephemeral output directory as TEST_OUTPUT_DIR env variable is unset
    test.go:160: Output directory has been created at: /tmp/TestKubeflowSdkSanity3532247662
--- FAIL: TestKubeflowSdkSanity (0.10s)
=== RUN   TestKubeflowSdkKueueIntegration
    kueue_operator.go:99: SetupKueue: Setting kueue to Unmanaged managementState in DataScienceCluster...
    kueue_operator.go:101: Should be able to set DSC kueue to Unmanaged
        Unexpected error:
            <*errors.StatusError | 0xc0008d2b40>: the server could not find the requested resource
            {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil},
                    Status: "Failure",
                    Message: "the server could not find the requested resource",
                    Reason: "NotFound",
                    Details: {
                        Name: "", Group: "", Kind: "", UID: "",
                        Causes: [
                            {Type: "UnexpectedServerResponse", Message: "404 page not found", Field: ""},
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 404,
                },
            }
        occurred
--- FAIL: TestKubeflowSdkKueueIntegration (0.00s)
=== RUN   TestSftTrainingHubSingleNodeSingleGPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestSftTrainingHubSingleNodeSingleGPU (0.00s)
=== RUN   TestOsftTrainingHubSingleNodeSingleGPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestOsftTrainingHubSingleNodeSingleGPU (0.00s)
=== RUN   TestLoraTrainingHubSingleNodeSingleGPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestLoraTrainingHubSingleNodeSingleGPU (0.00s)
=== RUN   TestOsftTrainingHubMultiNodeMultiGPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestOsftTrainingHubMultiNodeMultiGPU (0.00s)
=== RUN   TestLoraTrainingHubMultiNodeMultiGPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestLoraTrainingHubMultiNodeMultiGPU (0.00s)
=== RUN   TestSftTrainingHubMultiNodeMultiGPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestSftTrainingHubMultiNodeMultiGPU (0.00s)
=== RUN   TestRhaiTrainingProgressionCPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Tier1'
--- SKIP: TestRhaiTrainingProgressionCPU (0.00s)
=== RUN   TestRhaiJitCheckpointingCPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Tier1'
--- SKIP: TestRhaiJitCheckpointingCPU (0.00s)
=== RUN   TestRhaiFeaturesCPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Tier1'
--- SKIP: TestRhaiFeaturesCPU (0.00s)
=== RUN   TestRhaiTrainingProgressionCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiTrainingProgressionCuda (0.00s)
=== RUN   TestRhaiJitCheckpointingCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiJitCheckpointingCuda (0.00s)
=== RUN   TestRhaiFeaturesCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiFeaturesCuda (0.00s)
=== RUN   TestRhaiTrainingProgressionRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestRhaiTrainingProgressionRocm (0.00s)
=== RUN   TestRhaiJitCheckpointingRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestRhaiJitCheckpointingRocm (0.00s)
=== RUN   TestRhaiFeaturesRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestRhaiFeaturesRocm (0.00s)
=== RUN   TestRhaiTrainingProgressionMultiGpuCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiTrainingProgressionMultiGpuCuda (0.00s)
=== RUN   TestRhaiJitCheckpointingMultiGpuCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiJitCheckpointingMultiGpuCuda (0.00s)
=== RUN   TestRhaiFeaturesMultiGpuCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiFeaturesMultiGpuCuda (0.00s)
=== RUN   TestRhaiTrainingProgressionMultiGpuRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestRhaiTrainingProgressionMultiGpuRocm (0.00s)
=== RUN   TestRhaiJitCheckpointingMultiGpuRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestRhaiJitCheckpointingMultiGpuRocm (0.00s)
=== RUN   TestRhaiFeaturesMultiGpuRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestRhaiFeaturesMultiGpuRocm (0.00s)
=== RUN   TestTrainingFailureScenarios
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestTrainingFailureScenarios (0.00s)
=== RUN   TestTorchrunTrainingFailure
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestTorchrunTrainingFailure (0.00s)
=== RUN   TestRhaiS3CheckpointingCPU
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Tier1'
--- SKIP: TestRhaiS3CheckpointingCPU (0.00s)
=== RUN   TestRhaiS3FsdpFullStateCheckpointingCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiS3FsdpFullStateCheckpointingCuda (0.00s)
=== RUN   TestRhaiS3FsdpFullStateCheckpointingMultiProcessCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiS3FsdpFullStateCheckpointingMultiProcessCuda (0.00s)
=== RUN   TestRhaiS3FsdpSharedStateCheckpointingCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiS3FsdpSharedStateCheckpointingCuda (0.00s)
=== RUN   TestRhaiS3FsdpSharedStateCheckpointingMultiGpuCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiS3FsdpSharedStateCheckpointingMultiGpuCuda (0.00s)
=== RUN   TestRhaiS3DeepspeedStage0CheckpointingCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiS3DeepspeedStage0CheckpointingCuda (0.00s)
=== RUN   TestRhaiS3DeepspeedStage0CheckpointingMultiGpuCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestRhaiS3DeepspeedStage0CheckpointingMultiGpuCuda (0.00s)
=== RUN   TestPyTorchDDPMultiNodeMultiCPUWithTorchCuda28
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Tier1'
--- SKIP: TestPyTorchDDPMultiNodeMultiCPUWithTorchCuda28 (0.00s)
=== RUN   TestPyTorchDDPSingleNodeSingleGPUWithTorchCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestPyTorchDDPSingleNodeSingleGPUWithTorchCuda (0.00s)
=== RUN   TestPyTorchDDPSingleNodeMultiGPUWithTorchCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestPyTorchDDPSingleNodeMultiGPUWithTorchCuda (0.00s)
=== RUN   TestPyTorchDDPMultiNodeSingleGPUWithTorchCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestPyTorchDDPMultiNodeSingleGPUWithTorchCuda (0.00s)
=== RUN   TestPyTorchDDPMultiNodeMultiGPUWithTorchCuda
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-CUDA'
--- SKIP: TestPyTorchDDPMultiNodeMultiGPUWithTorchCuda (0.00s)
=== RUN   TestPyTorchDDPSingleNodeSingleGPUWithTorchRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestPyTorchDDPSingleNodeSingleGPUWithTorchRocm (0.00s)
=== RUN   TestPyTorchDDPSingleNodeMultiGPUWithTorchRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestPyTorchDDPSingleNodeMultiGPUWithTorchRocm (0.00s)
=== RUN   TestPyTorchDDPMultiNodeSingleGPUWithTorchRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestPyTorchDDPMultiNodeSingleGPUWithTorchRocm (0.00s)
=== RUN   TestPyTorchDDPMultiNodeMultiGPUWithTorchRocm
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'KFTO-ROCm'
--- SKIP: TestPyTorchDDPMultiNodeMultiGPUWithTorchRocm (0.00s)
=== RUN   TestKueueDefaultLocalQueueLabelInjection
    kueue_operator.go:99: SetupKueue: Setting kueue to Unmanaged managementState in DataScienceCluster...
    kueue_operator.go:101: Should be able to set DSC kueue to Unmanaged
        Unexpected error:
            <*errors.StatusError | 0xc0009ad9a0>: the server could not find the requested resource
            {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil},
                    Status: "Failure",
                    Message: "the server could not find the requested resource",
                    Reason: "NotFound",
                    Details: {
                        Name: "", Group: "", Kind: "", UID: "",
                        Causes: [
                            {Type: "UnexpectedServerResponse", Message: "404 page not found", Field: ""},
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 404,
                },
            }
        occurred
--- FAIL: TestKueueDefaultLocalQueueLabelInjection (0.00s)
=== RUN   TestKueueWorkloadPreemptionSuspendsTrainJob
    kueue_operator.go:99: SetupKueue: Setting kueue to Unmanaged managementState in DataScienceCluster...
    kueue_operator.go:101: Should be able to set DSC kueue to Unmanaged
        Unexpected error:
            <*errors.StatusError | 0xc0009dfcc0>: the server could not find the requested resource
            {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil},
                    Status: "Failure",
                    Message: "the server could not find the requested resource",
                    Reason: "NotFound",
                    Details: {
                        Name: "", Group: "", Kind: "", UID: "",
                        Causes: [
                            {Type: "UnexpectedServerResponse", Message: "404 page not found", Field: ""},
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 404,
                },
            }
        occurred
--- FAIL: TestKueueWorkloadPreemptionSuspendsTrainJob (0.00s)
=== RUN   TestKueueWorkloadInadmissibleWithNonExistentLocalQueue
    kueue_operator.go:99: SetupKueue: Setting kueue to Unmanaged managementState in DataScienceCluster...
    kueue_operator.go:101: Should be able to set DSC kueue to Unmanaged
        Unexpected error:
            <*errors.StatusError | 0xc0008be0a0>: the server could not find the requested resource
            {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil},
                    Status: "Failure",
                    Message: "the server could not find the requested resource",
                    Reason: "NotFound",
                    Details: {
                        Name: "", Group: "", Kind: "", UID: "",
                        Causes: [
                            {Type: "UnexpectedServerResponse", Message: "404 page not found", Field: ""},
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 404,
                },
            }
        occurred
--- FAIL: TestKueueWorkloadInadmissibleWithNonExistentLocalQueue (0.00s)
=== RUN   TestSetupUpgradeTrainJob
    trainer_kueue_upgrade_training_test.go:57: Skip due to issue RHOAIENG-48867
--- SKIP: TestSetupUpgradeTrainJob (0.00s)
=== RUN   TestRunUpgradeTrainJob
    trainer_kueue_upgrade_training_test.go:125: Skip due to issue RHOAIENG-48867
--- SKIP: TestRunUpgradeTrainJob (0.00s)
=== RUN   TestSetupSpecificRuntimeUpgradeTrainJob
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Pre-Upgrade'
--- SKIP: TestSetupSpecificRuntimeUpgradeTrainJob (0.00s)
=== RUN   TestRunSpecificRuntimeUpgradeTrainJob
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Post-Upgrade'
--- SKIP: TestRunSpecificRuntimeUpgradeTrainJob (0.00s)
=== RUN   TestKubeflowTrainerSmoke
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Smoke'
--- SKIP: TestKubeflowTrainerSmoke (0.00s)
=== RUN   TestSetupTrainingRuntime
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Pre-Upgrade'
--- SKIP: TestSetupTrainingRuntime (0.00s)
=== RUN   TestVerifyTrainingRuntime
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Post-Upgrade'
--- SKIP: TestVerifyTrainingRuntime (0.00s)
=== RUN   TestSetupSleepTrainJob
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Pre-Upgrade'
--- SKIP: TestSetupSleepTrainJob (0.00s)
=== RUN   TestVerifySleepTrainJob
    test_tag.go:37: Test tier 'Sanity' doesn't match expected tier 'Post-Upgrade'
--- SKIP: TestVerifySleepTrainJob (0.00s)
FAIL
TearDown: Setting kueue managementState to Removed in DataScienceCluster...
TearDown: Failed to set Kueue to Removed: TearDown: failed to set kueue to Removed: the server could not find the requested resource
ok      github.com/opendatahub-io/distributed-workloads/tests/trainer  2442.858s
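All four Kueue-related failures above (and the initial "Failed to get DSC" warning and teardown failure) are the same 404 from the API server: the DataScienceCluster resource the setup tries to patch could not be found, suggesting the ODH/RHOAI operator or its CRD is not installed on the test cluster. For reference, the state the setup attempts to apply looks roughly like this. This is a sketch based on the OpenDataHub DataScienceCluster API; the `default-dsc` name is an assumption, and the teardown sets the same field back to `Removed`.

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc        # assumed name; use the DSC that exists on the cluster
spec:
  components:
    kueue:
      managementState: Unmanaged   # SetupKueue target; TearDown restores Removed
```

If `kubectl get datasciencecluster` itself returns "the server could not find the requested resource", the CRD is absent and every test that calls SetupKueue will fail the same way before doing any work, which matches the 0.00s failure durations in the log.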