./mage -v ci:teste2e
go: downloading go.starlark.net v0.0.0-20251222184526-15019ee33dea
Running target: CI:TestE2E
I1223 01:25:32.733955 16893 magefile.go:529] setting up new custom bundle for testing...
I1223 01:25:33.251823 16893 util.go:512] found credentials for image ref quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453133-slvd -> user: redhat-appstudio-qe+redhat_appstudio_quality
Creating Tekton Bundle:
- Added Pipeline: docker-build to image
I1223 01:25:34.433849 16893 bundle.go:57] image digest for a new tekton bundle quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453133-slvd: quay.io/redhat-appstudio-qe/test-images@sha256:9538811bfb0c84372208fb9239ecfe82f8f46a225e291b034e946e15b292024a
I1223 01:25:34.433873 16893 magefile.go:535] To use the custom docker bundle locally, run below cmd:
export CUSTOM_DOCKER_BUILD_PIPELINE_BUNDLE=quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453133-slvd
I1223 01:25:34.433894 16893 e2e_repo.go:347] checking if repository is e2e-tests
I1223 01:25:34.433899 16893 e2e_repo.go:335] multi-platform tests and require sprayproxy registering are set to TRUE
exec: git "diff" "--name-status" "upstream/main..HEAD"
I1223 01:25:34.436655 16893 util.go:451] The following files, go.mod, go.sum, were changed!
exec: go "install" "-mod=mod" "github.com/onsi/ginkgo/v2/ginkgo"
go: downloading github.com/go-task/slim-sprig/v3 v3.0.0
go: downloading github.com/google/pprof v0.0.0-20241210010833-40e02aabc2ad
I1223 01:25:37.641975 16893 install.go:188] cloning 'https://github.com/redhat-appstudio/infra-deployments' with git ref 'refs/heads/main'
Enumerating objects: 71007, done.
Counting objects: 100% (34/34), done.
Compressing objects: 100% (24/24), done.
Total 71007 (delta 16), reused 10 (delta 10), pack-reused 70973 (from 3)
From https://github.com/redhat-appstudio/infra-deployments
 * branch main -> FETCH_HEAD
Already up to date.
Installing the OpenShift GitOps operator subscription:
clusterrole.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/appstudio-openshift-gitops-argocd-server created
subscription.operators.coreos.com/openshift-gitops-operator created
Waiting for default project (and namespace) to exist: ..................................................OK
Waiting for OpenShift GitOps Route: OK
argocd.argoproj.io/openshift-gitops patched
argocd.argoproj.io/openshift-gitops patched
Switch the Route to use re-encryption
argocd.argoproj.io/openshift-gitops patched
Restarting ArgoCD Server
pod "openshift-gitops-server-78868c5878-p5852" deleted
Allow any authenticated users to be admin on the Argo CD instance
argocd.argoproj.io/openshift-gitops patched
Mark Pending PVC as Healthy, workaround for WaitForFirstConsumer StorageClasses.
Warning: unknown field "spec.resourceCustomizations"
argocd.argoproj.io/openshift-gitops patched (no change)
Setting kustomize build options
argocd.argoproj.io/openshift-gitops patched
Setting ignore Aggregated Roles
argocd.argoproj.io/openshift-gitops patched
Setting ArgoCD tracking method to annotation
argocd.argoproj.io/openshift-gitops patched
Restarting GitOps server
deployment.apps/openshift-gitops-server restarted
=========================================================================
Argo CD URL is: https://openshift-gitops-server-openshift-gitops.apps.rosa.kx-4d0800d184.t2kf.p3.openshiftapps.com
(NOTE: It may take a few moments for the route to become available)
Waiting for the route: ...........OK
Login/password uses your OpenShift credentials ('Login with OpenShift' button)
Setting secrets for Quality Dashboard
namespace/quality-dashboard created
secret/quality-dashboard-secrets created
Creating secret for CI Helper App
namespace/ci-helper-app created
secret/ci-helper-app-secrets created
Setting secrets for pipeline-service
tekton-results namespace already exists, skipping creation
tekton-logging namespace already exists, skipping creation
namespace/product-kubearchive-logging created
Creating DB secret
secret/tekton-results-database created
Creating S3 secret
secret/tekton-results-s3 created
Creating MinIO config
secret/minio-storage-configuration created
Creating S3 secret
secret/tekton-results-s3 created
Creating MinIO config
MinIO config already exists, skipping creation
Creating Postgres TLS certs
Certificate request self-signature ok
subject=CN=cluster.local
Certificate request self-signature ok
subject=CN=postgres-postgresql.tekton-results.svc.cluster.local
secret/postgresql-tls created
configmap/rds-root-crt created
namespace/application-service created
Creating a has secret from legacy token
secret/has-github-token created
Creating a secret with a token for Image Controller
namespace/image-controller created
secret/quaytoken created
Configuring the cluster with a pull secret for Docker Hub
Saved credentials for docker.io into /tmp/tmp.YMRgnJ9pRK
secret/pull-secret data updated
Saved credentials for docker.io into /tmp/tmp.YMRgnJ9pRK
secret/docker-io-pull created
Setting secrets for Dora metrics exporter
namespace/dora-metrics created
secret/exporters-secret created
Setting Cluster Mode: preview
Switched to a new branch 'preview-main-iwoa'
labeling node/ip-10-0-143-50.ec2.internal...
node/ip-10-0-143-50.ec2.internal labeled
successfully labeled node/ip-10-0-143-50.ec2.internal
labeling node/ip-10-0-154-239.ec2.internal...
node/ip-10-0-154-239.ec2.internal labeled
successfully labeled node/ip-10-0-154-239.ec2.internal
labeling node/ip-10-0-172-14.ec2.internal...
node/ip-10-0-172-14.ec2.internal labeled
successfully labeled node/ip-10-0-172-14.ec2.internal
verifying labels...
all nodes labeled successfully.
Detected OCP minor version: 17
Changing AppStudio Gitlab Org to "redhat-appstudio-qe"
[preview-main-iwoa 9a999719c] Preview mode, do not merge into main
 6 files changed, 12 insertions(+), 18 deletions(-)
remote:
remote: Create a pull request for 'preview-main-iwoa' on GitHub by visiting:
remote:      https://github.com/redhat-appstudio-qe/infra-deployments/pull/new/preview-main-iwoa
remote:
To https://github.com/redhat-appstudio-qe/infra-deployments.git
 * [new branch] preview-main-iwoa -> preview-main-iwoa
branch 'preview-main-iwoa' set up to track 'qe/preview-main-iwoa'.
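(A minimal way to watch the same sync state by hand while the loop below runs, assuming oc access to the cluster bootstrapped above; this command is not part of the captured run output:

  oc get applications.argoproj.io -n openshift-gitops

Each Argo CD Application listed below is polled for its sync status and health until everything reports Synced/Healthy.)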
application.argoproj.io/all-application-sets created
Waiting for sync of all-application-sets argoCD app
Waiting for sync of all-application-sets argoCD app
Waiting for sync of all-application-sets argoCD app
application.argoproj.io/image-controller-in-cluster-local patched
application.argoproj.io/enterprise-contract-in-cluster-local patched
application.argoproj.io/squid-in-cluster-local patched
application.argoproj.io/has-in-cluster-local patched
application.argoproj.io/all-application-sets patched
application.argoproj.io/kyverno-in-cluster-local patched
application.argoproj.io/cert-manager-in-cluster-local patched
application.argoproj.io/application-api-in-cluster-local patched
application.argoproj.io/policies-in-cluster-local patched
application.argoproj.io/repository-validator-in-cluster-local patched
application.argoproj.io/tempo-in-cluster-local patched
application.argoproj.io/release-in-cluster-local patched
application.argoproj.io/image-rbac-proxy-in-cluster-local patched
application.argoproj.io/mintmaker-in-cluster-local patched
application.argoproj.io/crossplane-control-plane-in-cluster-local patched
application.argoproj.io/perf-team-prometheus-reader-in-cluster-local patched
application.argoproj.io/integration-in-cluster-local patched
application.argoproj.io/internal-services-in-cluster-local patched
application.argoproj.io/kubearchive-in-cluster-local patched
application.argoproj.io/trust-manager-in-cluster-local patched
application.argoproj.io/vector-tekton-logs-collector-in-cluster-local patched
application.argoproj.io/multi-platform-controller-in-cluster-local patched
application.argoproj.io/konflux-kite-in-cluster-local patched
application.argoproj.io/build-service-in-cluster-local patched
application.argoproj.io/dora-metrics-in-cluster-local patched
application.argoproj.io/monitoring-workload-prometheus-in-cluster-local patched
application.argoproj.io/knative-eventing-in-cluster-local patched
application.argoproj.io/tracing-workload-otel-collector-in-cluster-local patched
application.argoproj.io/monitoring-registry-in-cluster-local patched
application.argoproj.io/build-templates-in-cluster-local patched
application.argoproj.io/disable-csvcopy-in-cluster-local patched
application.argoproj.io/konflux-rbac-in-cluster-local patched
application.argoproj.io/monitoring-workload-grafana-in-cluster-local patched
application.argoproj.io/vector-kubearchive-log-collector-in-cluster-local patched (no change)
application.argoproj.io/tracing-workload-tracing-in-cluster-local patched
application.argoproj.io/project-controller-in-cluster-local patched
application.argoproj.io/kueue-in-cluster-local patched
application.argoproj.io/pipeline-service-in-cluster-local patched
application-api-in-cluster-local OutOfSync Missing
build-service-in-cluster-local Synced Progressing
image-rbac-proxy-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Degraded
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Missing
multi-platform-controller-in-cluster-local OutOfSync Missing
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
release-in-cluster-local Synced Progressing
squid-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Progressing
Waiting 10 seconds for application sync
application-api-in-cluster-local OutOfSync Missing
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Degraded
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Missing
multi-platform-controller-in-cluster-local OutOfSync Missing
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
release-in-cluster-local Synced Progressing
squid-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Progressing
Waiting 10 seconds for application sync
application-api-in-cluster-local OutOfSync Missing
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Degraded
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Missing
multi-platform-controller-in-cluster-local OutOfSync Missing
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
release-in-cluster-local Synced Progressing
squid-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Progressing
Waiting 10 seconds for application sync
application-api-in-cluster-local OutOfSync Missing
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Missing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Missing
multi-platform-controller-in-cluster-local OutOfSync Missing
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
release-in-cluster-local Synced Progressing
squid-in-cluster-local OutOfSync Missing
tracing-workload-tracing-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local Unknown Progressing
Waiting 10 seconds for application sync
application-api-in-cluster-local OutOfSync Missing
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Missing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Missing
multi-platform-controller-in-cluster-local OutOfSync Missing
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
release-in-cluster-local Synced Progressing
squid-in-cluster-local OutOfSync Missing
tracing-workload-tracing-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local Unknown Progressing
Waiting 10 seconds for application sync
application-api-in-cluster-local OutOfSync Missing
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Missing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Missing
multi-platform-controller-in-cluster-local OutOfSync Progressing
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
release-in-cluster-local Synced Progressing
squid-in-cluster-local OutOfSync Healthy
tracing-workload-tracing-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
application-api-in-cluster-local OutOfSync Missing
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Missing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Missing
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
release-in-cluster-local Synced Progressing
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Missing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Missing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Missing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kubearchive-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Missing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
build-service-in-cluster-local Synced Progressing
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Progressing
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Degraded
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
multi-platform-controller-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
vector-kubearchive-log-collector-in-cluster-local OutOfSync Healthy
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
kyverno-in-cluster-local OutOfSync Missing
monitoring-workload-grafana-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
policies-in-cluster-local OutOfSync Healthy
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
kyverno-in-cluster-local OutOfSync Missing
pipeline-service-in-cluster-local OutOfSync Missing
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
squid-in-cluster-local OutOfSync Healthy
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
internal-services-in-cluster-local OutOfSync Missing
kueue-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
kueue-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
kueue-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
kueue-in-cluster-local OutOfSync Healthy
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
pipeline-service-in-cluster-local OutOfSync Missing
trust-manager-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
pipeline-service-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
pipeline-service-in-cluster-local OutOfSync Missing
Waiting 10 seconds for application sync
All Applications are synced and Healthy
All required tekton resources are installed and ready
Tekton CRDs are ready
Setup Pac with existing QE sprayproxy and github App
namespace/openshift-pipelines configured
namespace/build-service configured
namespace/integration-service configured
secret/pipelines-as-code-secret created
secret/pipelines-as-code-secret created
secret/pipelines-as-code-secret created
secret/pipelines-as-code-secret created
Configured pipelines-as-code-secret secret in openshift-pipelines namespace
Switched to branch 'main'
Your branch is up to date with 'upstream/main'.
[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed. Detected at:
> goroutine 91 [running]:
> runtime/debug.Stack()
> /usr/lib/golang/src/runtime/debug/stack.go:26 +0x5e
> sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
> /opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.6/pkg/log/log.go:60 +0xcd
> sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc0005db700, {0x2f9a737, 0x14})
> /opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.6/pkg/log/deleg.go:147 +0x3e
> github.com/go-logr/logr.Logger.WithName({{0x36f4d10, 0xc0005db700}, 0x0}, {0x2f9a737?, 0x0?})
> /opt/app-root/src/go/pkg/mod/github.com/go-logr/logr@v1.4.2/logr.go:345 +0x36
> sigs.k8s.io/controller-runtime/pkg/client.newClient(0x2d77c00?, {0x0, 0xc000595ea0, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
> /opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.6/pkg/client/client.go:129 +0xf1
> sigs.k8s.io/controller-runtime/pkg/client.New(0xc000925688?, {0x0, 0xc000595ea0, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
> /opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.6/pkg/client/client.go:110 +0x7d
> github.com/konflux-ci/e2e-tests/pkg/clients/kubernetes.NewAdminKubernetesClient()
> /tmp/tmp.0hDDckIGAX/pkg/clients/kubernetes/client.go:157 +0xa5
> github.com/konflux-ci/e2e-tests/pkg/clients/sprayproxy.GetPaCHost()
> /tmp/tmp.0hDDckIGAX/pkg/clients/sprayproxy/sprayproxy.go:93 +0x1c
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine/repos.registerPacServer()
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/repos/common.go:426 +0x78
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine/repos.init.func8(0xc000842d88?)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/repos/common.go:378 +0x25
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.ActionFunc.Execute(0xc?, 0x2f75146?)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:279 +0x19
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Apply(...)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:315
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Check(0x5250240, 0xc000842d88)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:348 +0xb3
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.All.Check({0x5248cc0?, 0xc001041c00?, 0x1f20cd9?}, 0xc000842d88)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:245 +0x4f
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Eval(...)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:308
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Check(0x5250300, 0xc000842d88)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:340 +0x2b
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.All.Check({0x5251f80?, 0x4295dc?, 0x52d3cc0?}, 0xc000842d88)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:245 +0x4f
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*Rule).Eval(...)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:308
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*RuleEngine).runLoadedCatalog(0x5287ab0, {0xc00056ea08?, 0xc001213e60?, 0x47?}, 0xc000842d88)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:129 +0x119
> github.com/konflux-ci/e2e-tests/magefiles/rulesengine.(*RuleEngine).RunRulesOfCategory(0x5287ab0, {0x2f6f4e3, 0x2}, 0xc000842d88)
> /tmp/tmp.0hDDckIGAX/magefiles/rulesengine/types.go:121 +0x1b4
> main.CI.TestE2E({})
> /tmp/tmp.0hDDckIGAX/magefiles/magefile.go:330 +0x18a
> main.main.func19({0xc0004c8e60?, 0x178e8ae?})
> /tmp/tmp.0hDDckIGAX/magefiles/mage_output_file.go:827 +0xf
> main.main.func12.1()
> /tmp/tmp.0hDDckIGAX/magefiles/mage_output_file.go:302 +0x5b
> created by main.main.func12 in goroutine 1
> /tmp/tmp.0hDDckIGAX/magefiles/mage_output_file.go:297 +0xbe
I1223 01:36:26.861617 16893 common.go:434] Registered PaC server: https://pipelines-as-code-controller-openshift-pipelines.apps.rosa.kx-4d0800d184.t2kf.p3.openshiftapps.com
I1223 01:36:26.923709 16893 common.go:459] The PaC servers registered in Sprayproxy: https://pipelines-as-code-controller-openshift-pipelines.apps.rosa.kx-4d0800d184.t2kf.p3.openshiftapps.com, https://pipelines-as-code-controller-openshift-pipelines.apps.rosa.kx-729279a2c1.yyst.p3.openshiftapps.com
I1223 01:36:26.923738 16893 common.go:475] going to create new Tekton bundle remote-build for the purpose of testing multi-platform-controller PR
I1223 01:36:27.269494 16893 common.go:516] Found current task ref quay.io/konflux-ci/tekton-catalog/task-buildah:0.7@sha256:8b16e4e79853e3a3192f82e9f8930b79b04942bb389eaab4c44fb4d233ccefe6
I1223 01:36:27.271973 16893 util.go:512] found credentials for image ref quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453786-gvdp -> user: redhat-appstudio-qe+redhat_appstudio_quality
Creating Tekton Bundle:
- Added Pipeline: buildah-remote-pipeline to image
I1223 01:36:28.632236 16893 bundle.go:57] image digest for a new tekton bundle quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453786-gvdp: quay.io/redhat-appstudio-qe/test-images@sha256:136227f3cd4cc75b46d476ba5858feaa1cccbd9ea5a98d6ffe21a660139b56e1
I1223 01:36:28.632264 16893 common.go:542] SETTING ENV VAR CUSTOM_BUILDAH_REMOTE_PIPELINE_BUILD_BUNDLE_ARM64 to value quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453786-gvdp
I1223 01:36:28.899847 16893 common.go:516] Found current task ref quay.io/konflux-ci/tekton-catalog/task-buildah:0.7@sha256:8b16e4e79853e3a3192f82e9f8930b79b04942bb389eaab4c44fb4d233ccefe6
I1223 01:36:28.902344 16893 util.go:512] found credentials for image ref quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453788-jwlb -> user: redhat-appstudio-qe+redhat_appstudio_quality
Creating Tekton Bundle:
- Added Pipeline: buildah-remote-pipeline to image
I1223 01:36:30.255682 16893 bundle.go:57] image digest for a new tekton bundle quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453788-jwlb: quay.io/redhat-appstudio-qe/test-images@sha256:738bac8cbb2efcac61c6c90c44d20bc024ac434ae45446d6bac13da5b5ac982f
I1223 01:36:30.255714 16893 common.go:542] SETTING ENV VAR CUSTOM_BUILDAH_REMOTE_PIPELINE_BUILD_BUNDLE_S390X to value quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453788-jwlb
I1223 01:36:30.498085 16893 common.go:516] Found current task ref quay.io/konflux-ci/tekton-catalog/task-buildah:0.7@sha256:8b16e4e79853e3a3192f82e9f8930b79b04942bb389eaab4c44fb4d233ccefe6
I1223 01:36:30.500021 16893 util.go:512] found credentials for image ref quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453790-hswj -> user: redhat-appstudio-qe+redhat_appstudio_quality
Creating Tekton Bundle:
- Added Pipeline: buildah-remote-pipeline to image
I1223 01:36:31.995604 16893 bundle.go:57] image digest for a new tekton bundle quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453790-hswj: quay.io/redhat-appstudio-qe/test-images@sha256:ba78291e8868dd6d9897ea9c9b7fe6a89bd0d93272dc4924b2897efea970c3ec
I1223 01:36:31.995641 16893 common.go:542] SETTING ENV VAR CUSTOM_BUILDAH_REMOTE_PIPELINE_BUILD_BUNDLE_PPC64LE to value quay.io/redhat-appstudio-qe/test-images:pipeline-bundle-1766453790-hswj
exec: ginkgo "--seed=1766453132" "--timeout=1h30m0s" "--grace-period=30s" "--output-interceptor-mode=none" "--no-color" "--json-report=e2e-report.json" "--junit-report=e2e-report.xml" "--procs=20" "--nodes=20" "--p" "--output-dir=/workspace/artifact-dir" "./cmd" "--"
go: downloading github.com/konflux-ci/build-service v0.0.0-20240611083846-2dee6cfe6fe4
go: downloading github.com/IBM/go-sdk-core/v5 v5.15.3
go: downloading github.com/aws/aws-sdk-go-v2 v1.32.7
go: downloading github.com/aws/aws-sdk-go-v2/config v1.28.7
go: downloading github.com/aws/aws-sdk-go-v2/service/ec2 v1.135.0
go: downloading github.com/IBM/vpc-go-sdk v0.48.0
go: downloading github.com/go-playground/validator/v10 v10.17.0
go: downloading github.com/go-openapi/strfmt v0.22.0
go: downloading github.com/aws/smithy-go v1.22.1
go: downloading github.com/aws/aws-sdk-go-v2/credentials v1.17.48
go: downloading github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.22
go: downloading github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1
go: downloading github.com/aws/aws-sdk-go-v2/service/sso v1.24.8
go: downloading github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.7
go: downloading github.com/aws/aws-sdk-go-v2/service/sts v1.33.3
go: downloading github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2
go: downloading github.com/go-openapi/errors v0.21.0
go: downloading github.com/mitchellh/mapstructure v1.5.0
go: downloading github.com/oklog/ulid v1.3.1
go: downloading github.com/gabriel-vasile/mimetype v1.4.3
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn v1.3.0
go: downloading go.mongodb.org/mongo-driver v1.13.1
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1
go: downloading github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.26
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.7
go: downloading github.com/google/go-github/v45 v45.2.0
go: downloading github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.26
go: downloading github.com/go-playground/locales v0.14.1
Running Suite: Red Hat App Studio E2E tests - /tmp/tmp.0hDDckIGAX/cmd
=====================================================================
Random Seed: 1766453132
Will run 353 of 387 specs
Running in parallel across 20 processes
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created a PipelineRun is triggered [multi-platform, aws-host-pool]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:120
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created the build-container task from component pipelinerun is buildah-remote [multi-platform, aws-host-pool]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:124
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created The multi platform secret is populated [multi-platform, aws-host-pool]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:127
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created that PipelineRun completes successfully [multi-platform, aws-host-pool]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:148
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws host-pool allocation when the Component with multi-platform-build is created test that cleanup happened successfully [multi-platform, aws-host-pool]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:152
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created a PipelineRun is triggered [multi-platform, aws-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:251
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created the build-container task from component pipelinerun is buildah-remote [multi-platform, aws-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:255
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created The multi platform secret is populated [multi-platform, aws-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:259
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created that PipelineRun completes successfully [multi-platform, aws-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:263
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] aws dynamic allocation when the Component with multi-platform-build is created check cleanup happened successfully [multi-platform, aws-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:267
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created a PipelineRun is triggered [multi-platform, ibmz-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:341
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created the build-container task from component pipelinerun is buildah-remote [multi-platform, ibmz-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:345
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created The multi platform secret is populated [multi-platform, ibmz-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:349
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created that PipelineRun completes successfully [multi-platform, ibmz-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:353
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm system z dynamic allocation when the Component with multi-platform-build is created check cleanup happened successfully [multi-platform, ibmz-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:357
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created a PipelineRun is triggered [multi-platform, ibmp-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:432
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created the build-container task from component pipelinerun is buildah-remote [multi-platform, ibmp-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:436
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created The multi platform secret is populated [multi-platform, ibmp-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:440
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created that PipelineRun completes successfully [multi-platform, ibmp-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:444
------------------------------
P [PENDING] [multi-platform-build-service-suite Multi Platform Controller E2E tests] ibm power pc dynamic allocation when the Component with multi-platform-build is created check cleanup happened successfully [multi-platform, ibmp-dynamic]
/tmp/tmp.0hDDckIGAX/tests/build/multi-platform.go:448
------------------------------
P [PENDING] [release-pipelines-suite e2e tests for release-to-github pipeline] Release-to-github happy path Post-release verification verifies if release CR is created [release-pipelines, release-to-github, releaseToGithub]
/tmp/tmp.0hDDckIGAX/tests/release/pipelines/release_to_github.go:139
------------------------------
P [PENDING] [release-pipelines-suite e2e tests for release-to-github pipeline] Release-to-github happy path Post-release verification verifies the release pipelinerun is running and succeeds [release-pipelines, release-to-github, releaseToGithub]
/tmp/tmp.0hDDckIGAX/tests/release/pipelines/release_to_github.go:149
------------------------------
P [PENDING] [release-pipelines-suite e2e tests for release-to-github pipeline] Release-to-github happy path Post-release verification verifies release CR completed and set succeeded. [release-pipelines, release-to-github, releaseToGithub]
/tmp/tmp.0hDDckIGAX/tests/release/pipelines/release_to_github.go:182
------------------------------
P [PENDING] [release-pipelines-suite e2e tests for release-to-github pipeline] Release-to-github happy path Post-release verification verifies if the Release exists in github repo [release-pipelines, release-to-github, releaseToGithub]
/tmp/tmp.0hDDckIGAX/tests/release/pipelines/release_to_github.go:193
------------------------------
• [FAILED] [0.301 seconds]
[release-pipelines-suite e2e tests for multi arch with rh-advisories pipeline] Multi arch test happy path [BeforeAll] Post-release verification verifies the release CR is created [release-pipelines, rh-advisories, multiarch-advisories, multiArchAdvisories]
  [BeforeAll] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/multiarch_advisories.go:61
  [It] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/multiarch_advisories.go:113
  Timeline >>
  [FAILED] in [BeforeAll] - /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322 @ 12/23/25 01:38:03.089
  [PANICKED] in [AfterAll] - /usr/lib/golang/src/runtime/panic.go:262 @ 12/23/25 01:38:03.089
  << Timeline
  [FAILED] Unexpected error:
      <*url.Error | 0xc0019003f0>:
      Get "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/api/v1/namespaces/managed-release-team-tenant/secrets/pyxis": dial tcp: lookup api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com on 172.30.0.10:53: no such host
      {
          Op: "Get",
          URL: "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/api/v1/namespaces/managed-release-team-tenant/secrets/pyxis",
          Err: <*net.OpError | 0xc000c1a190>{
              Op: "dial",
              Net: "tcp",
              Source: nil,
              Addr: nil,
              Err: <*net.DNSError | 0xc0005e1b80>{
                  UnwrapErr: nil,
                  Err: "no such host",
                  Name: "api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com",
                  Server: "172.30.0.10:53",
                  IsTimeout: false,
                  IsTemporary: false,
                  IsNotFound: true,
              },
          },
      }
  occurred
  In [BeforeAll] at: /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322 @ 12/23/25 01:38:03.089
  There were additional failures detected.
  To view them in detail run ginkgo -vv
------------------------------
SSS
------------------------------
P [PENDING] [task-suite tkn bundle task] creates Tekton bundles with different params when context points to a file [build-templates]
/tmp/tmp.0hDDckIGAX/tests/build/tkn-bundle.go:177
------------------------------
P [PENDING] [task-suite tkn bundle task] creates Tekton bundles with different params creates Tekton bundles from specific context [build-templates]
/tmp/tmp.0hDDckIGAX/tests/build/tkn-bundle.go:188
------------------------------
P [PENDING] [task-suite tkn bundle task] creates Tekton bundles with different params when context is the root directory [build-templates]
/tmp/tmp.0hDDckIGAX/tests/build/tkn-bundle.go:198
------------------------------
P [PENDING] [task-suite tkn bundle task] creates Tekton bundles with different params creates Tekton bundles when context points to a file and a directory [build-templates]
/tmp/tmp.0hDDckIGAX/tests/build/tkn-bundle.go:207
------------------------------
P [PENDING] [task-suite tkn bundle task] creates Tekton bundles with different params creates Tekton bundles when using negation [build-templates]
/tmp/tmp.0hDDckIGAX/tests/build/tkn-bundle.go:217
------------------------------
P [PENDING] [task-suite tkn bundle task] creates Tekton bundles with different params allows overriding HOME environment variable [build-templates]
/tmp/tmp.0hDDckIGAX/tests/build/tkn-bundle.go:227
------------------------------
P [PENDING] [task-suite tkn bundle task] creates Tekton bundles with different params allows overriding STEP image [build-templates]
/tmp/tmp.0hDDckIGAX/tests/build/tkn-bundle.go:236
------------------------------
• [FAILED] [0.217 seconds]
[release-pipelines-suite FBC e2e-tests] with FBC happy path [BeforeAll] Post-release verification creates component from git source https://github.com/redhat-appstudio-qe/fbc-sample-repo-test [release-pipelines, fbc-release, fbcHappyPath]
  [BeforeAll] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/fbc_release.go:89
  [It] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/fbc_release.go:123
  Timeline >>
  [FAILED] in [BeforeAll] - /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322 @ 12/23/25 01:38:03.099
  [PANICKED] in [AfterAll] - /usr/lib/golang/src/runtime/panic.go:262 @ 12/23/25 01:38:03.099
  << Timeline
  [FAILED] Unexpected error:
      <*url.Error | 0xc000b26960>:
      Get "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/api/v1/namespaces/managed-release-team-tenant/secrets/pyxis": dial tcp: lookup api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com on 172.30.0.10:53: no such host
      {
          Op: "Get",
          URL: "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/api/v1/namespaces/managed-release-team-tenant/secrets/pyxis",
          Err: <*net.OpError | 0xc0009cc7d0>{
              Op: "dial",
              Net: "tcp",
              Source: nil,
              Addr: nil,
              Err: <*net.DNSError | 0xc00099caa0>{
                  UnwrapErr: nil,
                  Err: "no such host",
                  Name: "api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com",
                  Server: "172.30.0.10:53",
                  IsTimeout: false,
                  IsTemporary: false,
                  IsNotFound: true,
              },
          },
      }
  occurred
  In [BeforeAll] at: /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322 @ 12/23/25 01:38:03.099
  There were additional failures detected.
  To view them in detail run ginkgo -vv
------------------------------
SSSSSSSSSSSSSSS
------------------------------
• [FAILED] [0.590 seconds]
[release-pipelines-suite e2e tests for rh-push-to-redhat-io pipeline] Rh-push-to-redhat-io happy path [BeforeAll] Post-release verification verifies if the release CR is created [release-pipelines, rh-push-to-registry-redhat-io, PushToRedhatIO]
  [BeforeAll] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rh_push_to_registry_redhat_io.go:61
  [It] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rh_push_to_registry_redhat_io.go:110
  Timeline >>
  [FAILED] in [BeforeAll] - /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322 @ 12/23/25 01:38:02.895
  [PANICKED] in [AfterAll] - /usr/lib/golang/src/runtime/panic.go:262 @ 12/23/25 01:38:03.182
  << Timeline
  [FAILED] Unexpected error:
      <*url.Error | 0xc000e683f0>:
      Get "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/api/v1/namespaces/managed-release-team-tenant/secrets/pyxis": dial tcp: lookup api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com on 172.30.0.10:53: no such host
      {
          Op: "Get",
          URL: "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/api/v1/namespaces/managed-release-team-tenant/secrets/pyxis",
          Err: <*net.OpError | 0xc0003c61e0>{
              Op: "dial",
              Net: "tcp",
              Source: nil,
              Addr: nil,
              Err: <*net.DNSError | 0xc000e10730>{
                  UnwrapErr: nil,
                  Err: "no such host",
                  Name: "api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com",
                  Server: "172.30.0.10:53",
                  IsTimeout: false,
                  IsTemporary: false,
                  IsNotFound: true,
              },
          },
      }
  occurred
  In [BeforeAll] at: /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322 @ 12/23/25 01:38:02.895
  There were additional failures detected.
  To view them in detail run ginkgo -vv
------------------------------
SSS
------------------------------
• [FAILED] [0.215 seconds]
[release-pipelines-suite e2e tests for rh-advisories pipeline] Rh-advisories happy path [BeforeAll] Post-release verification verifies if release CR is created [release-pipelines, rh-advisories, rhAdvisories]
  [BeforeAll] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rh_advisories.go:61
  [It] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rh_advisories.go:118
  Timeline >>
  [FAILED] in [BeforeAll] - /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322 @ 12/23/25 01:38:03.399
  [PANICKED] in [AfterAll] - /usr/lib/golang/src/runtime/panic.go:262 @ 12/23/25 01:38:03.399
  << Timeline
  [FAILED] Unexpected error:
      <*url.Error | 0xc00192cea0>:
      Get "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/api/v1/namespaces/managed-release-team-tenant/secrets/pyxis": dial tcp: lookup api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com on 172.30.0.10:53: no such host
      {
          Op: "Get",
          URL: "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/api/v1/namespaces/managed-release-team-tenant/secrets/pyxis",
          Err: <*net.OpError | 0xc0001deaa0>{
              Op: "dial",
              Net: "tcp",
              Source: nil,
              Addr: nil,
              Err: <*net.DNSError | 0xc000890f00>{
                  UnwrapErr: nil,
                  Err: "no such host",
                  Name: "api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com",
                  Server: "172.30.0.10:53",
                  IsTimeout: false,
                  IsTemporary: false,
                  IsNotFound: true,
              },
          },
      }
  occurred
  In [BeforeAll] at: /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322 @ 12/23/25 01:38:03.399
  There were additional failures detected.
  To view them in detail run ginkgo -vv
------------------------------
SSS
------------------------------
• [FAILED] [3.378 seconds]
[release-pipelines-suite e2e tests for rhtap-service-push pipeline] Rhtap-service-push happy path [BeforeAll] Post-release verification verifies if the release CR is created [release-pipelines, rhtap-service-push, RhtapServicePush]
  [BeforeAll] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rhtap_service_push.go:75
  [It] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rhtap_service_push.go:150
  Timeline >>
  PR #3687 got created with sha 81d76c495df4bac45359c1b6cec48ca0baf305c5
  merged result sha: 5922a58aeaaa461ed1532cd053afea3390ae97a7 for PR #3687
  [FAILED] in [BeforeAll] - /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rhtap_service_push.go:119 @ 12/23/25 01:38:06.47
  [PANICKED] in [AfterAll] - /usr/lib/golang/src/runtime/panic.go:262 @ 12/23/25 01:38:06.47
  << Timeline
  [FAILED] Unexpected error:
      <*fmt.wrapError | 0xc0008858c0>:
      failed to get API group resources: unable to retrieve the complete list of server APIs: appstudio.redhat.com/v1alpha1: Get "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/apis/appstudio.redhat.com/v1alpha1": dial tcp: lookup api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com on 172.30.0.10:53: no such host
      {
          msg: "failed to get API group resources: unable to retrieve the complete list of server APIs: appstudio.redhat.com/v1alpha1: Get \"https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/apis/appstudio.redhat.com/v1alpha1\": dial tcp: lookup api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com on 172.30.0.10:53: no such host",
          err: <*apiutil.ErrResourceDiscoveryFailed | 0xc000fe0160>{
              {
                  Group: "appstudio.redhat.com",
                  Version: "v1alpha1",
              }: <*url.Error | 0xc000c5d410>{
                  Op: "Get",
                  URL: "https://api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com/apis/appstudio.redhat.com/v1alpha1",
                  Err: <*net.OpError | 0xc000c1b180>{
                      Op: "dial",
                      Net: "tcp",
                      Source: nil,
                      Addr: nil,
                      Err: <*net.DNSError | 0xc000c1aff0>{
                          UnwrapErr: nil,
                          Err: "no such host",
                          Name: "api-toolchain-host-operator.apps.stone-stg-host.qc0p.p1.openshiftapps.com",
                          Server: "172.30.0.10:53",
                          IsTimeout: false,
                          IsTemporary: false,
                          IsNotFound: true,
                      },
                  },
              },
          },
      }
  occurred
  In [BeforeAll] at: /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rhtap_service_push.go:119 @ 12/23/25 01:38:06.47
  There were additional failures detected.
  To view them in detail run ginkgo -vv
------------------------------
SSS••••••••••••••••
------------------------------
• [PANICKED] [125.008 seconds]
[upgrade-suite Create users and check their state] [It] Verify AppStudioProvisionedUser [upgrade-verify]
/tmp/tmp.0hDDckIGAX/tests/upgrade/verifyWorkload.go:20
  Timeline >>
  "msg"="Observed a panic: \"invalid memory address or nil pointer dereference\" (runtime error: invalid memory address or nil pointer dereference)\ngoroutine 231 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x2c50300, 0x5412340})\n\t/opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/runtime/runtime.go:75 +0x85\nk8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc001d80fc0?})\n\t/opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/runtime/runtime.go:49 +0x65\npanic({0x2c50300?, 0x5412340?})\n\t/usr/lib/golang/src/runtime/panic.go:792 +0x132\ngithub.com/konflux-ci/e2e-tests/pkg/sandbox.(*SandboxController).CheckUserCreatedWithSignUp.func1()\n\t/tmp/tmp.0hDDckIGAX/pkg/sandbox/sandbox.go:319 +0x35\ngithub.com/konflux-ci/e2e-tests/pkg/utils.WaitUntilWithInterval.func1({0xee6b2800?, 0x0?})\n\t/tmp/tmp.0hDDckIGAX/pkg/utils/util.go:129 +0x13\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1(0xc0003d3ce0?, {0x38564d8?, 0xc0005eed20?})\n\t/opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/wait/loop.go:53 +0x52\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x38564d8, 0xc0005eed20}, {0x384afd0, 0xc0003d3ce0}, 0x1, 0x0, 0xc000cace68)\n\t/opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/wait/loop.go:54 +0x115\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3856388?, 0x54c8ce0?}, 0xee6b2800, 0x419be5?, 0x1, 0xc000cace68)\n\t/opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/wait/poll.go:48 +0xa5\ngithub.com/konflux-ci/e2e-tests/pkg/utils.WaitUntilWithInterval(0xa?, 0xc000e98eb0?, 0x1?)\n\t/tmp/tmp.0hDDckIGAX/pkg/utils/util.go:129 +0x45\ngithub.com/konflux-ci/e2e-tests/pkg/sandbox.(*SandboxController).CheckUserCreatedWithSignUp(0x324230e?, {0x324230e?, 0x323ef3f?}, 0x8?)\n\t/tmp/tmp.0hDDckIGAX/pkg/sandbox/sandbox.go:318 +0x72\ngithub.com/konflux-ci/e2e-tests/pkg/sandbox.(*SandboxController).CheckUserCreated(0x0, {0x324230e, 0x9})\n\t/tmp/tmp.0hDDckIGAX/pkg/sandbox/sandbox.go:314 +0x4b\ngithub.com/konflux-ci/e2e-tests/tests/upgrade/verify.VerifyAppStudioProvisionedUser(0x0?)\n\t/tmp/tmp.0hDDckIGAX/tests/upgrade/verify/verifyUsers.go:14 +0x25\ngithub.com/konflux-ci/e2e-tests/tests/upgrade.init.func1.2()\n\t/tmp/tmp.0hDDckIGAX/tests/upgrade/verifyWorkload.go:21 +0x1a\ngithub.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7c28d6?, 0xc00015e600?})\n\t/opt/app-root/src/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.22.2/internal/node.go:475 +0x13\ngithub.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()\n\t/opt/app-root/src/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.22.2/internal/suite.go:894 +0x7b\ncreated by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 89\n\t/opt/app-root/src/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.22.2/internal/suite.go:881 +0xd7b" "error"=null
  [PANICKED] in [It] - /opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/runtime/runtime.go:56 @ 12/23/25 01:40:07.789
  << Timeline
  [PANICKED] Test Panicked
  In [It] at: /opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/runtime/runtime.go:56 @ 12/23/25 01:40:07.789
  runtime error: invalid memory address or nil pointer dereference
  Full Stack Trace
Trace k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc001d80fc0?}) /opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/runtime/runtime.go:56 +0xc7 panic({0x2c50300?, 0x5412340?}) /usr/lib/golang/src/runtime/panic.go:792 +0x132 github.com/konflux-ci/e2e-tests/pkg/sandbox.(*SandboxController).CheckUserCreatedWithSignUp.func1() /tmp/tmp.0hDDckIGAX/pkg/sandbox/sandbox.go:319 +0x35 github.com/konflux-ci/e2e-tests/pkg/utils.WaitUntilWithInterval.func1({0xee6b2800?, 0x0?}) /tmp/tmp.0hDDckIGAX/pkg/utils/util.go:129 +0x13 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1(0xc0003d3ce0?, {0x38564d8?, 0xc0005eed20?}) /opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/wait/loop.go:53 +0x52 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x38564d8, 0xc0005eed20}, {0x384afd0, 0xc0003d3ce0}, 0x1, 0x0, 0xc000cc3e68) /opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/wait/loop.go:54 +0x115 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3856388?, 0x54c8ce0?}, 0xee6b2800, 0x419be5?, 0x1, 0xc000cace68) /opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/wait/poll.go:48 +0xa5 github.com/konflux-ci/e2e-tests/pkg/utils.WaitUntilWithInterval(0xa?, 0xc000e98eb0?, 0x1?) /tmp/tmp.0hDDckIGAX/pkg/utils/util.go:129 +0x45 github.com/konflux-ci/e2e-tests/pkg/sandbox.(*SandboxController).CheckUserCreatedWithSignUp(0x324230e?, {0x324230e?, 0x323ef3f?}, 0x8?) /tmp/tmp.0hDDckIGAX/pkg/sandbox/sandbox.go:318 +0x72 github.com/konflux-ci/e2e-tests/pkg/sandbox.(*SandboxController).CheckUserCreated(0x0, {0x324230e, 0x9}) /tmp/tmp.0hDDckIGAX/pkg/sandbox/sandbox.go:314 +0x4b github.com/konflux-ci/e2e-tests/tests/upgrade/verify.VerifyAppStudioProvisionedUser(0x0?) /tmp/tmp.0hDDckIGAX/tests/upgrade/verify/verifyUsers.go:14 +0x25 github.com/konflux-ci/e2e-tests/tests/upgrade.init.func1.2() /tmp/tmp.0hDDckIGAX/tests/upgrade/verifyWorkload.go:21 +0x1a ------------------------------ SS••••••••••••••••••••••••••••••••••••••••••••••••••••••• ------------------------------ P [PENDING] [build-service-suite Build service E2E tests] test build secret lookup when two secrets are created when second component is deleted, pac pr branch should not exist in the repo [build-service, pac-build, secret-lookup] /tmp/tmp.0hDDckIGAX/tests/build/build.go:1121 ------------------------------ • ------------------------------ • [FAILED] [0.320 seconds] [release-pipelines-suite [HACBS-1571]test-release-e2e-push-image-to-pyxis] Post-release verification [It] validate the result of task create-pyxis-image contains image ids [release-pipelines, rh-push-to-external-registry] /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rh_push_to_external_registry.go:233 [FAILED] Unexpected error: <*errors.errorString | 0xc000594370>: task with create-pyxis-image name doesn't exist in managed-xshpj pipelinerun { s: "task with create-pyxis-image name doesn't exist in managed-xshpj pipelinerun", } occurred In [It] at: /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rh_push_to_external_registry.go:236 @ 12/23/25 01:43:51.39 ------------------------------ SS•••••••••••S•S• ------------------------------ P [PENDING] [build-service-suite Build templates E2E test] HACBS pipelines scenario sample-python-basic-oci when Pipeline Results are stored for component with Git source URL https://github.com/redhat-appstudio-qe/devfile-sample-python-basic and Pipeline docker-build should have Pipeline Logs [build, build-templates, HACBS, pipeline-service, pipeline] 
/tmp/tmp.0hDDckIGAX/tests/build/build_templates.go:489
------------------------------
•••S••••••••••••••••••••••••••••••••••••S•S••
------------------------------
P [PENDING]
[build-service-suite Build templates E2E test] HACBS pipelines scenario sample-python-basic-oci when Pipeline Results are stored for component with Git source URL https://github.com/redhat-appstudio-qe/devfile-sample-python-basic and Pipeline docker-build-oci-ta should have Pipeline Logs [build, build-templates, HACBS, pipeline-service, pipeline]
/tmp/tmp.0hDDckIGAX/tests/build/build_templates.go:489
------------------------------
••••••••
------------------------------
• [FAILED] [1.376 seconds]
[build-service-suite Build service E2E tests] test PaC component build github when a new Component with specified custom branch is created [It] eventually leads to the PipelineRun status report at Checks tab [build-service, github-webhook, pac-build, pipeline, image-controller, build-custom-branch]
/tmp/tmp.0hDDckIGAX/tests/build/build.go:449

  [FAILED] Expected
      : failure
  to equal
      : success
  In [It] at: /tmp/tmp.0hDDckIGAX/tests/build/build.go:453 @ 12/23/25 01:49:57.073
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•S•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••

Summarizing 8 Failures:
  [PANICKED!] [upgrade-suite Create users and check their state] [It] Verify AppStudioProvisionedUser [upgrade-verify]
  /opt/app-root/src/go/pkg/mod/k8s.io/apimachinery@v0.29.4/pkg/util/runtime/runtime.go:56
  [FAIL] [release-pipelines-suite FBC e2e-tests] with FBC happy path [BeforeAll] Post-release verification creates component from git source https://github.com/redhat-appstudio-qe/fbc-sample-repo-test [release-pipelines, fbc-release, fbcHappyPath]
  /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322
  [FAIL] [release-pipelines-suite e2e tests for rh-push-to-redhat-io pipeline] Rh-push-to-redhat-io happy path [BeforeAll] Post-release verification verifies if the release CR is created [release-pipelines, rh-push-to-registry-redhat-io, PushToRedhatIO]
  /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322
  [FAIL] [release-pipelines-suite e2e tests for rh-advisories pipeline] Rh-advisories happy path [BeforeAll] Post-release verification verifies if release CR is created [release-pipelines, rh-advisories, rhAdvisories]
  /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322
  [FAIL] [build-service-suite Build service E2E tests] test PaC component build github when a new Component with specified custom branch is created [It] eventually leads to the PipelineRun status report at Checks tab [build-service, github-webhook, pac-build, pipeline, image-controller, build-custom-branch]
  /tmp/tmp.0hDDckIGAX/tests/build/build.go:453
  [FAIL] [release-pipelines-suite e2e tests for multi arch with rh-advisories pipeline] Multi arch test happy path [BeforeAll] Post-release verification verifies the release CR is created [release-pipelines, rh-advisories, multiarch-advisories, multiArchAdvisories]
  /tmp/tmp.0hDDckIGAX/tests/release/releaseLib.go:322
  [FAIL] [release-pipelines-suite e2e tests for rhtap-service-push pipeline] Rhtap-service-push happy path [BeforeAll] Post-release verification verifies if the release CR is created [release-pipelines, rhtap-service-push, RhtapServicePush]
  /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rhtap_service_push.go:119
  [FAIL] [release-pipelines-suite [HACBS-1571]test-release-e2e-push-image-to-pyxis] Post-release verification [It] validate the result of task create-pyxis-image contains image ids [release-pipelines, rh-push-to-external-registry]
  /tmp/tmp.0hDDckIGAX/tests/release/pipelines/rh_push_to_external_registry.go:236

Ran 273 of 387 Specs in 2185.997 seconds
FAIL! -- 265 Passed | 8 Failed | 34 Pending | 80 Skipped

Ginkgo ran 1 suite in 37m56.522901439s

Test Suite Failed
Error: running "ginkgo --seed=1766453132 --timeout=1h30m0s --grace-period=30s --output-interceptor-mode=none --no-color --json-report=e2e-report.json --junit-report=e2e-report.xml --procs=20 --nodes=20 --p --output-dir=/workspace/artifact-dir ./cmd --" failed with exit code 1
make: *** [Makefile:25: ci/test/e2e] Error 1
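Note: a single failed spec from the summary above can usually be re-run locally with Ginkgo's focus filter, provided the cluster credentials and environment variables the suite expects are already set up. The invocation below is an illustrative sketch, not taken from the CI configuration: the focus string is copied from the failed spec name, and the ./cmd package path and timeout mirror the ginkgo command shown in the error above.

  # illustrative re-run of one failed spec; adjust the focus regexp and environment to your setup
  ginkgo -vv --focus "Verify AppStudioProvisionedUser" --timeout=1h30m0s ./cmd --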