./mage -v ci:teste2e
Running target: CI:TestE2E
I0905 08:35:23.629205 16358 magefile.go:521] setting up new custom bundle for testing...
I0905 08:35:24.133167 16358 util.go:521] found credentials for image ref quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1757061324-xqgp -> user: redhat-appstudio-qe+redhat_appstudio_quality
Creating Tekton Bundle:
	- Added Pipeline: docker-build to image
I0905 08:35:25.778771 16358 bundle.go:57] image digest for a new tekton bundle quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1757061324-xqgp: quay.io/redhat-appstudio-qe/test-images@sha256:2b71fa02441a3537305d96d2b4b9da01ddcd41bd7def2ea65a60439c238857b1
I0905 08:35:25.778792 16358 magefile.go:527] To use the custom docker bundle locally, run below cmd:
export CUSTOM_DOCKER_BUILD_PIPELINE_BUNDLE=quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1757061324-xqgp
I0905 08:35:25.778809 16358 build_service.go:49] checking if repository is build-service
I0905 08:35:25.778817 16358 e2e_repo.go:347] checking if repository is e2e-tests
I0905 08:35:25.778823 16358 release_service.go:50] checking if repository is release-service
I0905 08:35:26.772812 16358 release_service_catalog.go:104] checking if repository is release-service-catalog
I0905 08:35:26.772829 16358 integration_service.go:49] checking if repository is integration-service
I0905 08:35:26.772834 16358 image_controller.go:49] checking if repository is image-controller
I0905 08:35:26.772837 16358 image_controller.go:30] require sprayproxy registering is set to TRUE
I0905 08:35:26.772840 16358 image_controller.go:33] setting 'image-controller' test label
exec: go "install" "-mod=mod" "github.com/onsi/ginkgo/v2/ginkgo"
go: downloading github.com/google/pprof v0.0.0-20241210010833-40e02aabc2ad
go: downloading github.com/go-task/slim-sprig/v3 v3.0.0
I0905 08:35:30.759802 16358 install.go:188] cloning 'https://github.com/redhat-appstudio/infra-deployments' with git ref 'refs/heads/main'
Enumerating objects: 61804, done.
Counting objects: 100% (88/88), done.
Compressing objects: 100% (58/58), done.
Total 61804 (delta 63), reused 31 (delta 30), pack-reused 61716 (from 4)
From https://github.com/redhat-appstudio/infra-deployments
 * branch            main       -> FETCH_HEAD
Already up to date.
Installing the OpenShift GitOps operator subscription:
clusterrole.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-server created
subscription.operators.coreos.com/openshift-gitops-operator created
Waiting for default project (and namespace) to exist: ............................................OK
Waiting for OpenShift GitOps Route: OK
argocd.argoproj.io/openshift-gitops patched
argocd.argoproj.io/openshift-gitops patched
Switch the Route to use re-encryption
argocd.argoproj.io/openshift-gitops patched
Restarting ArgoCD Server
pod "openshift-gitops-server-79db7fb858-lfpcj" deleted
Allow any authenticated users to be admin on the Argo CD instance
argocd.argoproj.io/openshift-gitops patched
Mark Pending PVC as Healthy, workaround for WaitForFirstConsumer StorageClasses.
Warning: unknown field "spec.resourceCustomizations"
argocd.argoproj.io/openshift-gitops patched (no change)
Setting kustomize build options
argocd.argoproj.io/openshift-gitops patched
Setting ignore Aggregated Roles
argocd.argoproj.io/openshift-gitops patched
Setting ArgoCD tracking method to annotation
argocd.argoproj.io/openshift-gitops patched
Restarting GitOps server
deployment.apps/openshift-gitops-server restarted
=========================================================================
Argo CD URL is: https://openshift-gitops-server-openshift-gitops.apps.rosa.kx-1a0487e7d6.4331.p3.openshiftapps.com
(NOTE: It may take a few moments for the route to become available)
Waiting for the route: .........OK
Login/password uses your OpenShift credentials ('Login with OpenShift' button)
Setting secrets for Quality Dashboard
namespace/quality-dashboard created
secret/quality-dashboard-secrets created
Creating secret for CI Helper App
namespace/ci-helper-app created
secret/ci-helper-app-secrets created
[WARN] Namespace 'image-controller' does not exist. Creating it...
namespace/image-controller created
secret/sealights-token created
[INFO] Secret 'sealights-token' has been created/updated in namespace 'image-controller'.
[WARN] Namespace 'integration-service' does not exist. Creating it...
namespace/integration-service created
secret/sealights-token created
[INFO] Secret 'sealights-token' has been created/updated in namespace 'integration-service'.
[WARN] Namespace 'release-service' does not exist. Creating it...
namespace/release-service created
secret/sealights-token created
[INFO] Secret 'sealights-token' has been created/updated in namespace 'release-service'.
[WARN] Namespace 'build-service' does not exist. Creating it...
namespace/build-service created
secret/sealights-token created
[INFO] Secret 'sealights-token' has been created/updated in namespace 'build-service'.
Setting secrets for pipeline-service
tekton-results namespace already exists, skipping creation
tekton-logging namespace already exists, skipping creation
namespace/product-kubearchive-logging created
Creating DB secret
secret/tekton-results-database created
Creating S3 secret
secret/tekton-results-s3 created
Creating MinIO config
secret/minio-storage-configuration created
Creating S3 secret
secret/tekton-results-s3 created
Creating MinIO config
MinIO config already exists, skipping creation
Creating Postgres TLS certs
Certificate request self-signature ok
subject=CN=cluster.local
Certificate request self-signature ok
subject=CN=postgres-postgresql.tekton-results.svc.cluster.local
secret/postgresql-tls created
configmap/rds-root-crt created
namespace/application-service created
Creating a has secret from legacy token
secret/has-github-token created
Creating a secret with a token for Image Controller
Warning: resource namespaces/image-controller is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
namespace/image-controller configured
secret/quaytoken created
Configuring the cluster with a pull secret for Docker Hub
Saved credentials for docker.io into /tmp/tmp.4vImBPopAT
secret/pull-secret data updated
Saved credentials for docker.io into /tmp/tmp.4vImBPopAT
secret/docker-io-pull created
serviceaccount/appstudio-pipeline created
Setting secrets for Dora metrics exporter
namespace/dora-metrics created
secret/exporters-secret created
Setting Cluster Mode: preview
Switched to a new branch 'preview-main-zabq'
labeling node/ip-10-0-142-45.ec2.internal...
node/ip-10-0-142-45.ec2.internal labeled
successfully labeled node/ip-10-0-142-45.ec2.internal
labeling node/ip-10-0-157-50.ec2.internal...
node/ip-10-0-157-50.ec2.internal labeled
successfully labeled node/ip-10-0-157-50.ec2.internal
labeling node/ip-10-0-160-244.ec2.internal...
node/ip-10-0-160-244.ec2.internal labeled
successfully labeled node/ip-10-0-160-244.ec2.internal
verifying labels...
all nodes labeled successfully.
Detected OCP minor version: 16
Changing AppStudio Gitlab Org to "redhat-appstudio-qe"
[preview-main-zabq c307afb3] Preview mode, do not merge into main
 7 files changed, 17 insertions(+), 26 deletions(-)
remote:
remote: Create a pull request for 'preview-main-zabq' on GitHub by visiting:
remote:      https://github.com/redhat-appstudio-qe/infra-deployments/pull/new/preview-main-zabq
remote:
To https://github.com/redhat-appstudio-qe/infra-deployments.git
 * [new branch]      preview-main-zabq -> preview-main-zabq
branch 'preview-main-zabq' set up to track 'qe/preview-main-zabq'.
application.argoproj.io/all-application-sets created
Waiting for sync of all-application-sets argoCD app
Waiting for sync of all-application-sets argoCD app
Waiting for sync of all-application-sets argoCD app
Waiting for sync of all-application-sets argoCD app
Waiting for sync of all-application-sets argoCD app
application.argoproj.io/project-controller-in-cluster-local patched
application.argoproj.io/build-service-in-cluster-local patched
application.argoproj.io/kyverno-in-cluster-local patched
application.argoproj.io/kubearchive-in-cluster-local patched
application.argoproj.io/dev-sso-in-cluster-local patched
application.argoproj.io/build-templates-in-cluster-local patched
application.argoproj.io/application-api-in-cluster-local patched
application.argoproj.io/vector-kubearchive-log-collector-in-cluster-local patched (no change)
application.argoproj.io/mintmaker-in-cluster-local patched
application.argoproj.io/multi-platform-controller-in-cluster-local patched
application.argoproj.io/workspaces-in-cluster-local patched
application.argoproj.io/konflux-rbac-in-cluster-local patched
application.argoproj.io/disable-csvcopy-in-cluster-local patched
application.argoproj.io/tracing-workload-otel-collector-in-cluster-local patched
application.argoproj.io/enterprise-contract-in-cluster-local patched
application.argoproj.io/has-in-cluster-local patched
application.argoproj.io/spacerequest-cleaner-in-cluster-local patched
application.argoproj.io/perf-team-prometheus-reader-in-cluster-local patched
application.argoproj.io/cert-manager-in-cluster-local patched
application.argoproj.io/integration-in-cluster-local patched
application.argoproj.io/repository-validator-in-cluster-local patched
application.argoproj.io/ingresscontroller-in-cluster-local patched
application.argoproj.io/kueue-in-cluster-local patched
application.argoproj.io/internal-services-in-cluster-local patched
application.argoproj.io/tempo-in-cluster-local patched
application.argoproj.io/release-in-cluster-local patched
application.argoproj.io/dora-metrics-in-cluster-local patched
application.argoproj.io/tracing-workload-tracing-in-cluster-local patched
application.argoproj.io/vector-tekton-logs-collector-in-cluster-local patched
application.argoproj.io/knative-eventing-in-cluster-local patched
application.argoproj.io/pipeline-service-in-cluster-local patched
application.argoproj.io/monitoring-workload-prometheus-in-cluster-local patched
application.argoproj.io/policies-in-cluster-local patched
application.argoproj.io/monitoring-workload-grafana-in-cluster-local patched
application.argoproj.io/image-controller-in-cluster-local patched
application.argoproj.io/konflux-kite-in-cluster-local patched
application.argoproj.io/all-application-sets patched
application.argoproj.io/crossplane-control-plane-in-cluster-local patched
policies-in-cluster-local  OutOfSync  Healthy
Waiting 10 seconds for application sync
[the two lines above repeated every 10 seconds until the application finished syncing]
All Applications are synced and Healthy
All required tekton resources are installed and ready
Tekton CRDs are ready
Setup Pac with existing QE sprayproxy and github App
namespace/openshift-pipelines configured
namespace/build-service configured
namespace/integration-service configured
secret/pipelines-as-code-secret created
secret/pipelines-as-code-secret created
secret/pipelines-as-code-secret created
secret/pipelines-as-code-secret created
Configured pipelines-as-code-secret secret in openshift-pipelines namespace
Switched to branch 'main'
Your branch is up to date with 'upstream/main'.
[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Detected at:
> goroutine 98 [running]:
> runtime/debug.Stack()
> 	/usr/lib/golang/src/runtime/debug/stack.go:26 +0x5e
> sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
> 	/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.5/pkg/log/log.go:60 +0xcd
> sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc0007aa380, {0x2f93fcc, 0x14})
> 	/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.5/pkg/log/deleg.go:147 +0x3e
> github.com/go-logr/logr.Logger.WithName({{0x36ed850, 0xc0007aa380}, 0x0}, {0x2f93fcc?, 0x0?})
> 	/opt/app-root/src/go/pkg/mod/github.com/go-logr/logr@v1.4.2/logr.go:345 +0x36
> sigs.k8s.io/controller-runtime/pkg/client.newClient(0x2d71ca0?, {0x0, 0xc00056a310, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
> 	/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.5/pkg/client/client.go:129 +0xf1
> sigs.k8s.io/controller-runtime/pkg/client.New(0xc0009e5448?, {0x0, 0xc00056a310, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
> 	/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.5/pkg/client/client.go:110 +0x7d
> github.com/konflux-ci/e2e-tests/pkg/clients/kubernetes.NewAdminKubernetesClient()
> 	/tmp/tmp.RjMkFavENS/pkg/clients/kubernetes/client.go:157 +0xa5
> github.com/konflux-ci/e2e-tests/pkg/clients/sprayproxy.GetPaCHost()
> 	/tmp/tmp.RjMkFavENS/pkg/clients/sprayproxy/sprayproxy.go:93 +0x1c
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine/repos.registerPacServer()
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/repos/common.go:426 +0x78
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine/repos.init.func8(0xc00031e488?)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/repos/common.go:378 +0x25
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.ActionFunc.Execute(0x2f988c4?, 0x16?)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:279 +0x19
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Apply(...)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:315
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Check(0x5246220, 0xc00031e488)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:348 +0xb3
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.All.Check({0x5238e40?, 0x0?, 0xc000e03be8?}, 0xc00031e488)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:245 +0x4f
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Eval(...)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:308
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Check(0x5246340, 0xc00031e488)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:340 +0x2b
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.None.Check({0x52156b0?, 0x20?, 0x1f1bc39?}, 0xc00031e488)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:263 +0x4f
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.Any.Check({0x5238f40?, 0xc0015f1c30?, 0x1?}, 0xc00031e488)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:227 +0x63
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.All.Check({0x5244ee0?, 0x7fda5a2099a8?, 0x70?}, 0xc00031e488)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:245 +0x4f
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Eval(...)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:308
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*RuleEngine).runLoadedCatalog(0x527d990, {0xc00080d188?, 0xc0010e5e60?, 0x47?}, 0xc00031e488)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:129 +0x119
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*RuleEngine).RunRulesOfCategory(0x527d990, {0x2f68e23, 0x2}, 0xc00031e488)
> 	/tmp/tmp.RjMkFavENS/magefiles/rulesengine/types.go:121 +0x1b4
> main.CI.TestE2E({})
> 	/tmp/tmp.RjMkFavENS/magefiles/magefile.go:322 +0x18a
> main.main.func19({0xc0004eb790?, 0x178e94e?})
> 	/tmp/tmp.RjMkFavENS/magefiles/mage_output_file.go:827 +0xf
> main.main.func12.1()
> 	/tmp/tmp.RjMkFavENS/magefiles/mage_output_file.go:302 +0x5b
> created by main.main.func12 in goroutine 1
> 	/tmp/tmp.RjMkFavENS/magefiles/mage_output_file.go:297 +0xbe
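The trace above is controller-runtime's standard warning, printed because a client was constructed before any logger was registered; the run itself is unaffected, but controller-runtime's own log output is discarded. A minimal sketch of how such a warning is normally silenced, assuming a hypothetical standalone program rather than code from this repo:

    package main

    import (
        "sigs.k8s.io/controller-runtime/pkg/log"
        "sigs.k8s.io/controller-runtime/pkg/log/zap"
    )

    func main() {
        // Register a logger once, early in startup, before any
        // controller-runtime client is constructed; without this call
        // the library emits the warning above and drops its logs.
        log.SetLogger(zap.New())
        // ... build clients with client.New(...) and proceed as usual ...
    }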
I0905 08:56:43.027078 16358 common.go:434] Registered PaC server: https://pipelines-as-code-controller-openshift-pipelines.apps.rosa.kx-1a0487e7d6.4331.p3.openshiftapps.com
I0905 08:56:43.095331 16358 common.go:459] The PaC servers registered in Sprayproxy: https://pipelines-as-code-controller-openshift-pipelines.apps.konflux-4-16-us-west-2-6vvtr.konflux-qe.devcluster.openshift.com, https://pipelines-as-code-controller-openshift-pipelines.apps.rosa.kx-1a0487e7d6.4331.p3.openshiftapps.com, https://pipelines-as-code-controller-openshift-pipelines.apps.ci-op-b0mhjdv0-c15f2.rhtap-perfscale.devcluster.openshift.com, https://pipelines-as-code-controller-openshift-pipelines.apps.konflux-4-16-us-west-2-t8z9p.konflux-qe.devcluster.openshift.com
W0905 08:56:43.096172 16358 install.go:178] folder /tmp/tmp.RjMkFavENS/tmp/infra-deployments already exists... removing
I0905 08:56:43.260242 16358 install.go:188] cloning 'https://github.com/redhat-appstudio/infra-deployments' with git ref 'refs/heads/main'
Enumerating objects: 61813, done.
Counting objects: 100% (102/102), done.
Compressing objects: 100% (73/73), done.
Total 61813 (delta 67), reused 29 (delta 29), pack-reused 61711 (from 4)
From https://github.com/redhat-appstudio/infra-deployments
 * branch            main       -> FETCH_HEAD
Already up to date.
Installing the OpenShift GitOps operator subscription:
clusterrole.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-application-controller unchanged
clusterrole.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-server unchanged
clusterrolebinding.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-application-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-server unchanged
subscription.operators.coreos.com/openshift-gitops-operator unchanged
Waiting for default project (and namespace) to exist: OK
Waiting for OpenShift GitOps Route: OK
argocd.argoproj.io/openshift-gitops patched (no change)
argocd.argoproj.io/openshift-gitops patched (no change)
Switch the Route to use re-encryption
argocd.argoproj.io/openshift-gitops patched (no change)
Restarting ArgoCD Server
pod "openshift-gitops-server-86b8dcb949-xr5cb" deleted
Allow any authenticated users to be admin on the Argo CD instance
argocd.argoproj.io/openshift-gitops patched (no change)
Mark Pending PVC as Healthy, workaround for WaitForFirstConsumer StorageClasses.
Warning: unknown field "spec.resourceCustomizations"
argocd.argoproj.io/openshift-gitops patched (no change)
Setting kustomize build options
argocd.argoproj.io/openshift-gitops patched (no change)
Setting ignore Aggregated Roles
argocd.argoproj.io/openshift-gitops patched (no change)
Setting ArgoCD tracking method to annotation
argocd.argoproj.io/openshift-gitops patched (no change)
Restarting GitOps server
deployment.apps/openshift-gitops-server restarted
=========================================================================
Argo CD URL is: https://openshift-gitops-server-openshift-gitops.apps.rosa.kx-1a0487e7d6.4331.p3.openshiftapps.com
(NOTE: It may take a few moments for the route to become available)
Waiting for the route: ..OK
Login/password uses your OpenShift credentials ('Login with OpenShift' button)
Setting secrets for Quality Dashboard
namespace/quality-dashboard configured
Creating secret for CI Helper App
namespace/ci-helper-app configured
[INFO] Updating existing secret 'sealights-token' in namespace 'image-controller'.
secret "sealights-token" deleted
secret/sealights-token created
[INFO] Secret 'sealights-token' has been created/updated in namespace 'image-controller'.
[INFO] Updating existing secret 'sealights-token' in namespace 'integration-service'.
secret "sealights-token" deleted
secret/sealights-token created
[INFO] Secret 'sealights-token' has been created/updated in namespace 'integration-service'.
[INFO] Updating existing secret 'sealights-token' in namespace 'release-service'.
secret "sealights-token" deleted
secret/sealights-token created
[INFO] Secret 'sealights-token' has been created/updated in namespace 'release-service'.
[INFO] Updating existing secret 'sealights-token' in namespace 'build-service'.
secret "sealights-token" deleted
secret/sealights-token created
[INFO] Secret 'sealights-token' has been created/updated in namespace 'build-service'.
Setting secrets for pipeline-service
tekton-results namespace already exists, skipping creation
tekton-logging namespace already exists, skipping creation
product-kubearchive-logging namespace already exists, skipping creation
Creating DB secret
DB secret already exists, skipping creation
Creating S3 secret
S3 secret already exists, skipping creation
Creating S3 secret
S3 secret already exists, skipping creation
Creating Postgres TLS certs
Postgres DB cert secret already exists, skipping creation
namespace/application-service configured
Creating a has secret from legacy token
secret/has-github-token configured
Creating a secret with a token for Image Controller
namespace/image-controller configured
secret/quaytoken configured
Configuring the cluster with a pull secret for Docker Hub
Saved credentials for docker.io into /tmp/tmp.WQ4UPwoU0y
secret/pull-secret data updated
Saved credentials for docker.io into /tmp/tmp.WQ4UPwoU0y
secret/docker-io-pull configured
serviceaccount/appstudio-pipeline configured
Setting secrets for Dora metrics exporter
namespace/dora-metrics configured
Setting Cluster Mode: preview
Switched to a new branch 'preview-main-tjzh'
labeling node/ip-10-0-142-45.ec2.internal...
node/ip-10-0-142-45.ec2.internal not labeled
successfully labeled node/ip-10-0-142-45.ec2.internal
labeling node/ip-10-0-157-50.ec2.internal...
node/ip-10-0-157-50.ec2.internal not labeled
successfully labeled node/ip-10-0-157-50.ec2.internal
labeling node/ip-10-0-160-244.ec2.internal...
node/ip-10-0-160-244.ec2.internal not labeled
successfully labeled node/ip-10-0-160-244.ec2.internal
verifying labels...
all nodes labeled successfully.
Detected OCP minor version: 16
Changing AppStudio Gitlab Org to "redhat-appstudio-qe"
[preview-main-tjzh 83165816] Preview mode, do not merge into main
 7 files changed, 17 insertions(+), 26 deletions(-)
remote:
remote: Create a pull request for 'preview-main-tjzh' on GitHub by visiting:
remote:      https://github.com/redhat-appstudio-qe/infra-deployments/pull/new/preview-main-tjzh
remote:
To https://github.com/redhat-appstudio-qe/infra-deployments.git
 * [new branch]      preview-main-tjzh -> preview-main-tjzh
branch 'preview-main-tjzh' set up to track 'qe/preview-main-tjzh'.
application.argoproj.io/all-application-sets configured
application.argoproj.io/kubearchive-in-cluster-local patched
application.argoproj.io/knative-eventing-in-cluster-local patched
application.argoproj.io/monitoring-workload-grafana-in-cluster-local patched
application.argoproj.io/image-controller-in-cluster-local patched
application.argoproj.io/build-templates-in-cluster-local patched
application.argoproj.io/konflux-rbac-in-cluster-local patched
application.argoproj.io/cert-manager-in-cluster-local patched
application.argoproj.io/has-in-cluster-local patched
application.argoproj.io/repository-validator-in-cluster-local patched
application.argoproj.io/workspaces-in-cluster-local patched
application.argoproj.io/pipeline-service-in-cluster-local patched
application.argoproj.io/all-application-sets patched
application.argoproj.io/dev-sso-in-cluster-local patched
application.argoproj.io/perf-team-prometheus-reader-in-cluster-local patched
application.argoproj.io/multi-platform-controller-in-cluster-local patched
application.argoproj.io/kueue-in-cluster-local patched
application.argoproj.io/enterprise-contract-in-cluster-local patched
application.argoproj.io/build-service-in-cluster-local patched
application.argoproj.io/monitoring-workload-prometheus-in-cluster-local patched
application.argoproj.io/ingresscontroller-in-cluster-local patched
application.argoproj.io/vector-tekton-logs-collector-in-cluster-local patched
application.argoproj.io/integration-in-cluster-local patched
application.argoproj.io/tracing-workload-otel-collector-in-cluster-local patched
application.argoproj.io/internal-services-in-cluster-local patched
application.argoproj.io/konflux-kite-in-cluster-local patched
application.argoproj.io/spacerequest-cleaner-in-cluster-local patched
application.argoproj.io/disable-csvcopy-in-cluster-local patched
application.argoproj.io/crossplane-control-plane-in-cluster-local patched
application.argoproj.io/tracing-workload-tracing-in-cluster-local patched
application.argoproj.io/mintmaker-in-cluster-local patched
application.argoproj.io/project-controller-in-cluster-local patched
application.argoproj.io/application-api-in-cluster-local patched
application.argoproj.io/tempo-in-cluster-local patched
application.argoproj.io/postgres patched
application.argoproj.io/release-in-cluster-local patched
application.argoproj.io/policies-in-cluster-local patched
application.argoproj.io/dora-metrics-in-cluster-local patched
application.argoproj.io/vector-kubearchive-log-collector-in-cluster-local patched (no change)
application.argoproj.io/kyverno-in-cluster-local patched
All Applications are synced and Healthy
All required tekton resources are installed and ready
Tekton CRDs are ready
Setup Pac with existing QE sprayproxy and github App
namespace/openshift-pipelines configured
namespace/build-service configured
namespace/integration-service configured
secret/pipelines-as-code-secret configured
secret/pipelines-as-code-secret configured
secret/pipelines-as-code-secret configured
secret/pipelines-as-code-secret configured
Configured pipelines-as-code-secret secret in openshift-pipelines namespace
Switched to branch 'main'
Your branch is up to date with 'upstream/main'.
I0905 09:05:35.714433 16358 common.go:434] Registered PaC server: https://pipelines-as-code-controller-openshift-pipelines.apps.rosa.kx-1a0487e7d6.4331.p3.openshiftapps.com
I0905 09:05:35.782438 16358 common.go:459] The PaC servers registered in Sprayproxy: https://pipelines-as-code-controller-openshift-pipelines.apps.konflux-4-16-us-west-2-6vvtr.konflux-qe.devcluster.openshift.com, https://pipelines-as-code-controller-openshift-pipelines.apps.rosa.kx-1a0487e7d6.4331.p3.openshiftapps.com, https://pipelines-as-code-controller-openshift-pipelines.apps.ci-op-b0mhjdv0-c15f2.rhtap-perfscale.devcluster.openshift.com, https://pipelines-as-code-controller-openshift-pipelines.apps.konflux-4-16-us-west-2-t8z9p.konflux-qe.devcluster.openshift.com
I0905 09:05:35.782458 16358 types.go:155] The following rules have matched image-controller repo CI Workflow Rule.
I0905 09:05:35.782464 16358 types.go:180] Will apply rules
exec: ginkgo "--seed=1757061323" "--timeout=1h30m0s" "--grace-period=30s" "--output-interceptor-mode=none" "--label-filter=image-controller" "--no-color" "--json-report=e2e-report.json" "--junit-report=e2e-report.xml" "--procs=20" "--nodes=20" "--p" "--output-dir=/workspace/artifact-dir" "./cmd" "--"
go: downloading github.com/IBM/go-sdk-core/v5 v5.15.3
go: downloading github.com/IBM/vpc-go-sdk v0.48.0
go: downloading github.com/aws/aws-sdk-go-v2 v1.32.7
go: downloading github.com/aws/aws-sdk-go-v2/config v1.28.7
go: downloading github.com/aws/aws-sdk-go-v2/service/ec2 v1.135.0
go: downloading github.com/konflux-ci/build-service v0.0.0-20240611083846-2dee6cfe6fe4
go: downloading github.com/go-playground/validator/v10 v10.17.0
go: downloading github.com/go-openapi/strfmt v0.22.0
go: downloading github.com/go-openapi/errors v0.21.0
go: downloading github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2
go: downloading github.com/oklog/ulid v1.3.1
go: downloading go.mongodb.org/mongo-driver v1.13.1
go: downloading github.com/mitchellh/mapstructure v1.5.0
go: downloading github.com/leodido/go-urn v1.3.0
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/gabriel-vasile/mimetype v1.4.3
go: downloading github.com/aws/aws-sdk-go-v2/credentials v1.17.48
go: downloading github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.22
go: downloading github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1
go: downloading github.com/aws/aws-sdk-go-v2/service/sso v1.24.8
go: downloading github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.7
go: downloading github.com/aws/aws-sdk-go-v2/service/sts v1.33.3
go: downloading github.com/aws/smithy-go v1.22.1
go: downloading github.com/go-playground/locales v0.14.1
go: downloading github.com/google/go-github/v45 v45.2.0
go: downloading github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.26
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.7
go: downloading github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.26
Running Suite: Red Hat App Studio E2E tests - /tmp/tmp.RjMkFavENS/cmd
=====================================================================
Random Seed: 1757061323
Will run 62 of 401 specs
Running in parallel across 20 processes
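The "--label-filter=image-controller" flag in the ginkgo invocation above is why only 62 of 401 specs run: specs that do not carry that label are skipped (the runs of S markers below), while P [PENDING] marks specs the suite itself reports as pending. A minimal sketch, assuming a hypothetical spec file that is not part of this repo, of how a Ginkgo v2 spec carries such a label:

    package demo_test

    import (
        . "github.com/onsi/ginkgo/v2"
        . "github.com/onsi/gomega"
    )

    // Only specs labeled "image-controller" match a run started with
    // --label-filter=image-controller; everything else is skipped.
    var _ = Describe("image controller", Label("image-controller"), func() {
        It("creates an image repository for a new Component", func() {
            Expect(true).To(BeTrue()) // placeholder assertion
        })
    })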
S
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when a new Component is created should have a related PaC init PR created [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:77
------------------------------
SS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when a new Component is created triggers a build PipelineRun [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:98
------------------------------
SS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when a new Component is created verifies if the build PipelineRun contains the finalizer [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:103
------------------------------
SSSS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when a new Component is created waits for build PipelineRun to succeed [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:114
------------------------------
SSSS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when the build pipelineRun run succeeded checks if the BuildPipelineRun have the annotation of chains signed [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:122
------------------------------
S
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when the build pipelineRun run succeeded checks if the Snapshot is created [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:126
------------------------------
S
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when the build pipelineRun run succeeded checks if the Build PipelineRun got annotated with Snapshot name [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:132
------------------------------
S
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when the build pipelineRun run succeeded verifies that the finalizer has been removed from the build pipelinerun [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:136
------------------------------
SSS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when the build pipelineRun run succeeded checks if all of the integrationPipelineRuns passed [integration-service, pipelinerun-resolution, slow]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:149
------------------------------
SS
------------------------------
P [PENDING]
[release-pipelines-suite e2e tests for release-to-github pipeline] Release-to-github happy path Post-release verification verifies if release CR is created [release-pipelines, release-to-github, releaseToGithub]
/tmp/tmp.RjMkFavENS/tests/release/pipelines/release_to_github.go:139
------------------------------
S
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when the build pipelineRun run succeeded checks if the passed status of integration test is reported in the Snapshot [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:153
------------------------------
S
------------------------------
P [PENDING]
[release-pipelines-suite e2e tests for release-to-github pipeline] Release-to-github happy path Post-release verification verifies the release pipelinerun is running and succeeds [release-pipelines, release-to-github, releaseToGithub]
/tmp/tmp.RjMkFavENS/tests/release/pipelines/release_to_github.go:149
------------------------------
SS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when the build pipelineRun run succeeded checks if the finalizer was removed from all of the related Integration pipelineRuns [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:169
------------------------------
SSS
------------------------------
P [PENDING]
[release-pipelines-suite e2e tests for release-to-github pipeline] Release-to-github happy path Post-release verification verifies release CR completed and set succeeded. [release-pipelines, release-to-github, releaseToGithub]
/tmp/tmp.RjMkFavENS/tests/release/pipelines/release_to_github.go:182
------------------------------
SSSSSS
------------------------------
P [PENDING]
[release-pipelines-suite e2e tests for release-to-github pipeline] Release-to-github happy path Post-release verification verifies if the Release exists in github repo [release-pipelines, release-to-github, releaseToGithub]
/tmp/tmp.RjMkFavENS/tests/release/pipelines/release_to_github.go:193
------------------------------
S
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when integration pipelineRun is created it passes, annotations and labels not overwritten by integration service checks integration pipelineRun passed [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:176
------------------------------
SSSSSSSSS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when integration pipelineRun is created it passes, annotations and labels not overwritten by integration service verifies that existing labels and annotations are not overwritten by integration service [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:180
------------------------------
SSSSSSSSS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when integration pipelineRun is created it passes, annotations and labels not overwritten by integration service verifies that PipelinesAsCode specific annotations with dynamic values are preserved [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:226
------------------------------
SSSSSS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when integration pipelineRun is created it passes, annotations and labels not overwritten by integration service verifies that ResolutionRequest is deleted after pipeline resolution [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:263
------------------------------
SSSS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when integration pipelineRun is created it passes, annotations and labels not overwritten by integration service verifies that no orphaned ResolutionRequests remain in namespace after test completion [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:287
------------------------------
SSS
------------------------------
P [PENDING]
[integration-service-suite Integration Service E2E tests ITS PipelineRun Resolution] with happy path for general flow of Integration service when integration pipelineRun is created it passes, annotations and labels not overwritten by integration service validates that second integration pipelineRun failed due to resolution not pipelinerun [integration-service, pipelinerun-resolution]
/tmp/tmp.RjMkFavENS/tests/integration-service/pipelinerun-resolution.go:320
------------------------------
SSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created a PipelineRun is triggered [multi-platform, aws-host-pool]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:120
------------------------------
SS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created the build-container task from component pipelinerun is buildah-remote [multi-platform, aws-host-pool]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:124
------------------------------
SSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created The multi platform secret is populated [multi-platform, aws-host-pool]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:127
------------------------------
SSSSSSSSSSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created that PipelineRun completes successfully [multi-platform, aws-host-pool]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:148
------------------------------
SSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created test that cleanup happened successfully [multi-platform, aws-host-pool]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:152
------------------------------
SSSSSSSSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created a PipelineRun is triggered [multi-platform, aws-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:251
------------------------------
SSS
------------------------------
P [PENDING]
[build-service-suite Build service E2E tests] test build secret lookup when two secrets are created when second component is deleted, pac pr branch should not exist in the repo [build-service, pac-build, secret-lookup]
/tmp/tmp.RjMkFavENS/tests/build/build.go:1097
------------------------------
SSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created the build-container task from component pipelinerun is buildah-remote [multi-platform, aws-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:255
------------------------------
SSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created The multi platform secret is populated [multi-platform, aws-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:259
------------------------------
SSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created that PipelineRun completes successfully [multi-platform, aws-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:263
------------------------------
SSSSSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created check cleanup happened successfully [multi-platform, aws-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:267
------------------------------
S
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created a PipelineRun is triggered [multi-platform, ibmz-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:341
------------------------------
SSSSSS
------------------------------
P [PENDING]
[task-suite tkn bundle task] creates Tekton bundles with different params when context points to a file [build-templates]
/tmp/tmp.RjMkFavENS/tests/build/tkn-bundle.go:177
------------------------------
SSSSSS
------------------------------
P [PENDING]
[task-suite tkn bundle task] creates Tekton bundles with different params creates Tekton bundles from specific context [build-templates]
/tmp/tmp.RjMkFavENS/tests/build/tkn-bundle.go:188
------------------------------
SSSSSSSSSSS
------------------------------
P [PENDING]
[task-suite tkn bundle task] creates Tekton bundles with different params when context is the root directory [build-templates]
/tmp/tmp.RjMkFavENS/tests/build/tkn-bundle.go:198
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created the build-container task from component pipelinerun is buildah-remote [multi-platform, ibmz-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:345
------------------------------
SSSSSSSS
------------------------------
P [PENDING]
[task-suite tkn bundle task] creates Tekton bundles with different params creates Tekton bundles when context points to a file and a directory [build-templates]
/tmp/tmp.RjMkFavENS/tests/build/tkn-bundle.go:207
------------------------------
SSSSSSS
------------------------------
P [PENDING]
[task-suite tkn bundle task] creates Tekton bundles with different params creates Tekton bundles when using negation [build-templates]
/tmp/tmp.RjMkFavENS/tests/build/tkn-bundle.go:217
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created The multi platform secret is populated [multi-platform, ibmz-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:349
------------------------------
SSSSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created that PipelineRun completes successfully [multi-platform, ibmz-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:353
------------------------------
SSS
------------------------------
P [PENDING]
[task-suite tkn bundle task] creates Tekton bundles with different params allows overriding HOME environment variable [build-templates]
/tmp/tmp.RjMkFavENS/tests/build/tkn-bundle.go:227
------------------------------
SSSS
------------------------------
P [PENDING]
[task-suite tkn bundle task] creates Tekton bundles with different params allows overriding STEP image [build-templates]
/tmp/tmp.RjMkFavENS/tests/build/tkn-bundle.go:236
------------------------------
SS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created check cleanup happened successfully [multi-platform, ibmz-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:357
------------------------------
SSSSSSS
------------------------------
P [PENDING]
[build-service-suite Build templates E2E test] HACBS pipelines when Pipeline Results are stored for component with Git source URL https://github.com/redhat-appstudio-qe/devfile-sample-python-basic and Pipeline docker-build should have Pipeline Logs [build, build-templates, HACBS, pipeline-service, pipeline]
/tmp/tmp.RjMkFavENS/tests/build/build_templates.go:478
------------------------------
SS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created a PipelineRun is triggered [multi-platform, ibmp-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:432
------------------------------
SSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created the build-container task from component pipelinerun is buildah-remote [multi-platform, ibmp-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:436
------------------------------
SSSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created The multi platform secret is populated [multi-platform, ibmp-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:440
------------------------------
S
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created that PipelineRun completes successfully [multi-platform, ibmp-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:444
------------------------------
SSSSS
------------------------------
P [PENDING]
[multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created check cleanup happened successfully [multi-platform, ibmp-dynamic]
/tmp/tmp.RjMkFavENS/tests/build/multi-platform.go:448
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
P [PENDING]
[build-service-suite Build templates E2E test] HACBS pipelines when Pipeline Results are stored for component with Git source URL https://github.com/redhat-appstudio-qe/devfile-sample-python-basic and Pipeline docker-build-oci-ta should have Pipeline Logs [build, build-templates, HACBS, pipeline-service, pipeline]
/tmp/tmp.RjMkFavENS/tests/build/build_templates.go:478
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS••••••••••••
------------------------------
• [FAILED]
[271.385 seconds] [build-service-suite Build service E2E tests] test PaC component build github when a new Component with specified custom branch is created [It] the PipelineRun should eventually finish successfully [build-service, github-webhook, pac-build, pipeline, image-controller, build-custom-branch] /tmp/tmp.RjMkFavENS/tests/build/build.go:352 Timeline >> PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: ResolvingTaskRef PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-6knps reason: Failed attempt 1/3: PipelineRun "gh-test-custom-branch-ijphyu-on-pull-request-6knps" failed: pod: gh-test-custom-branch-ijphyu-on-pull-request-6knps-init-pod | init container: prepare 2025/09/05 09:08:01 Entrypoint initialization pod: gh-test-custom-branch-ijphyu-on-pull-request-6knps-init-pod | init container: place-scripts 2025/09/05 09:08:01 Decoded script /tekton/scripts/script-0-q7q6b pod: gh-test-custom-branch-ijphyu-on-pull-request-6knps-init-pod | container step-init: Build Initialize: quay.io/redhat-appstudio-qe/build-e2e-ouwb/gh-test-custom-branch-ijphyu:on-pr-411355e703f3675d05645e04644345bdf639d50a Determine if Image Already Exists PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-c7s75 reason: PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-c7s75 reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-c7s75 reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-c7s75 reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-c7s75 reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-c7s75 reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-c7s75 reason: Failed attempt 2/3: PipelineRun "gh-test-custom-branch-ijphyu-on-pull-request-c7s75" failed: pod: gh-test-custom-branch-ijphyu-on-pull-request-c7s75-init-pod | init container: prepare 2025/09/05 09:10:16 Entrypoint initialization pod: gh-test-custom-branch-ijphyu-on-pull-request-c7s75-init-pod | init container: place-scripts 2025/09/05 09:10:17 Decoded script /tekton/scripts/script-0-r522w pod: gh-test-custom-branch-ijphyu-on-pull-request-c7s75-init-pod | container step-init: Build Initialize: quay.io/redhat-appstudio-qe/build-e2e-ouwb/gh-test-custom-branch-ijphyu:on-pr-fe1a82ba2f755f091417c5cdeb2991349638f998 Determine if Image Already Exists PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-q9rcd reason: PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-q9rcd reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-q9rcd reason: Running PipelineRun 
gh-test-custom-branch-ijphyu-on-pull-request-q9rcd reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-q9rcd reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-q9rcd reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-q9rcd reason: Running PipelineRun gh-test-custom-branch-ijphyu-on-pull-request-q9rcd reason: Failed attempt 3/3: PipelineRun "gh-test-custom-branch-ijphyu-on-pull-request-q9rcd" failed: pod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | init container: prepare 2025/09/05 09:11:22 Entrypoint initialization pod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | init container: place-scripts 2025/09/05 09:11:23 Decoded script /tekton/scripts/script-0-7phxx pod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | container step-init: Build Initialize: quay.io/redhat-appstudio-qe/build-e2e-ouwb/gh-test-custom-branch-ijphyu:on-pr-a732dc768218fcaba942800b278437949b337bae Determine if Image Already Exists [FAILED] in [It] - /tmp/tmp.RjMkFavENS/tests/build/build.go:354 @ 09/05/25 09:12:22.001 << Timeline [FAILED] Expected success, but got an error: <*errors.errorString | 0xc0018fe2f0>: pod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | init container: prepare 2025/09/05 09:11:22 Entrypoint initialization pod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | init container: place-scripts 2025/09/05 09:11:23 Decoded script /tekton/scripts/script-0-7phxx pod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | container step-init: Build Initialize: quay.io/redhat-appstudio-qe/build-e2e-ouwb/gh-test-custom-branch-ijphyu:on-pr-a732dc768218fcaba942800b278437949b337bae Determine if Image Already Exists { s: "\n pod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | init container: prepare\n2025/09/05 09:11:22 Entrypoint initialization\n\n pod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | init container: place-scripts\n2025/09/05 09:11:23 Decoded script /tekton/scripts/script-0-7phxx\n\npod: gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod | container step-init: \nBuild Initialize: quay.io/redhat-appstudio-qe/build-e2e-ouwb/gh-test-custom-branch-ijphyu:on-pr-a732dc768218fcaba942800b278437949b337bae\n\nDetermine if Image Already Exists\n", } In [It] at: /tmp/tmp.RjMkFavENS/tests/build/build.go:354 @ 09/05/25 09:12:22.001 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS Summarizing 1 Failure: [FAIL] [build-service-suite Build service E2E tests] test PaC component build github when a new Component with specified custom branch is created [It] the PipelineRun should eventually finish successfully [build-service, github-webhook, pac-build, pipeline, image-controller, build-custom-branch] /tmp/tmp.RjMkFavENS/tests/build/build.go:354 Ran 13 of 401 Specs in 316.182 seconds FAIL! -- 12 Passed | 1 Failed | 51 Pending | 337 Skipped Ginkgo ran 1 suite in 6m47.486807406s Test Suite Failed E0905 09:12:23.282362 16358 types.go:186] Failed to execute rule: image-controller repo CI Workflow Rule: Execute the full workflow for e2e-tests repo in CI Error: running "ginkgo --seed=1757061323 --timeout=1h30m0s --grace-period=30s --output-interceptor-mode=none --label-filter=image-controller --no-color --json-report=e2e-report.json --junit-report=e2e-report.xml --procs=20 --nodes=20 --p --output-dir=/workspace/artifact-dir ./cmd --" failed with exit code 1 make: *** [Makefile:25: ci/test/e2e] Error 1
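
Failure analysis: all three PipelineRun attempts for the PaC custom-branch spec (6knps, c7s75, q9rcd) have captured logs that stop right after the step-init container printed "Determine if Image Already Exists", and the spec gave up at build.go:354 after roughly 271 seconds. If the test namespace still exists on the cluster, the last attempt can be inspected directly; a minimal sketch, assuming the tkn CLI is installed and using <test-namespace> as a placeholder for the namespace the suite provisioned (the suite may have already cleaned it up):

# list the PipelineRuns created for the test component (namespace is a placeholder)
kubectl get pipelineruns -n <test-namespace> | grep gh-test-custom-branch

# full task-by-task logs of the last failed attempt
tkn pipelinerun logs gh-test-custom-branch-ijphyu-on-pull-request-q9rcd -n <test-namespace>

# raw container logs of the init pod named in the failure, if it has not been garbage-collected
kubectl logs gh-test-custom-branch-ijphyu-on-pull-request-q9rcd-init-pod -n <test-namespace> --all-containers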
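
To iterate on just this spec without rerunning the whole mage target, the ginkgo invocation quoted in the error above can be narrowed; a minimal sketch, assuming the same e2e-tests checkout and cluster credentials (the --focus regex is illustrative and not part of the CI rule, and --procs=1 serializes the run for readable output):

# re-run only the failed spec with the CI run's seed and label filter
ginkgo --seed=1757061323 --timeout=1h30m0s --grace-period=30s \
  --label-filter=image-controller \
  --focus="the PipelineRun should eventually finish successfully" \
  --no-color --procs=1 ./cmd --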