[INFO] Fetching and executing solve-pr-pairing.sh...
[INFO] Loading env vars from parameters
[INFO] Updating image repository to quay.io/redhat-user-workloads/rhtap-integration-tenant/integration-service/integration-service
[INFO] Updating image tag to on-pr-f9ecbbcf927ff3641a98e1e84dc2be2a8206a597-linux-x86-64
[INFO] Updating GitHub reference to hongweiliu17@f9ecbbcf927ff3641a98e1e84dc2be2a8206a597
[INFO] kubernetes cluster is hosted on: https://54.71.21.136:6443
Kubernetes control plane is running at https://54.71.21.136:6443
CoreDNS is running at https://54.71.21.136:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[INFO] Installing Konflux CI dependencies
🔍 Checking requirements
kubectl is installed
openssl is installed
Checking kubectl version
kubectl version v1.35.0 meets minimum requirement (v1.31.4)
All requirements are met
Continue
🧪 Testing PVC creation for default storage class
Creating PVC from './dependencies/pre-deployment-pvc-binding' using the cluster's default storage class
namespace/test-pvc-ns created
persistentvolumeclaim/test-pvc created
pod/test-pvc-consumer created
persistentvolumeclaim/test-pvc condition met
namespace "test-pvc-ns" deleted
persistentvolumeclaim "test-pvc" deleted from test-pvc-ns namespace
pod "test-pvc-consumer" deleted from test-pvc-ns namespace
PVC binding successful
🌊 Deploying Konflux Dependencies
🔐 Deploying Cert Manager...
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager-webhook created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
pod/cert-manager-66d46f75d6-g6ng8 condition met
pod/cert-manager-cainjector-856bdc4b95-sg98s condition met
pod/cert-manager-webhook-7fdfc5cd79-5gdrt condition met
🤝 Deploying Trust Manager...
customresourcedefinition.apiextensions.k8s.io/bundles.trust.cert-manager.io created
serviceaccount/trust-manager created
role.rbac.authorization.k8s.io/trust-manager created
role.rbac.authorization.k8s.io/trust-manager:leaderelection created
clusterrole.rbac.authorization.k8s.io/trust-manager created
rolebinding.rbac.authorization.k8s.io/trust-manager created
rolebinding.rbac.authorization.k8s.io/trust-manager:leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/trust-manager created
service/trust-manager created
service/trust-manager-metrics created
deployment.apps/trust-manager created
certificate.cert-manager.io/trust-manager created
issuer.cert-manager.io/trust-manager created
validatingwebhookconfiguration.admissionregistration.k8s.io/trust-manager created
pod/trust-manager-7c9f8b8f7d-s7tzx condition met
📜 Setting up Cluster Issuer...
certificate.cert-manager.io/selfsigned-ca created
clusterissuer.cert-manager.io/ca-issuer created
clusterissuer.cert-manager.io/self-signed-cluster-issuer created
🐱 Deploying Tekton...
🐱 Installing Tekton Operator...
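Note: the Certificates created above (trust-manager, selfsigned-ca, and later dex-cert) are issued asynchronously by cert-manager, which is why the error-log section at the end of this run shows transient FailedMount events for the trust-manager-tls and dex-cert secrets before those pods eventually started. A minimal readiness check one could run between these steps, where the namespaces of the Certificate objects are assumptions based on the pod specs dumped later in this log:
  kubectl wait --for=condition=Ready certificate/trust-manager -n cert-manager --timeout=120s
  kubectl wait --for=condition=Ready certificate/selfsigned-ca -n cert-manager --timeout=120s   # namespace assumed
  kubectl wait --for=condition=Ready certificate/dex-cert -n dex --timeout=120s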
namespace/tekton-operator created
customresourcedefinition.apiextensions.k8s.io/manualapprovalgates.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonchains.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonconfigs.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektondashboards.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonhubs.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektoninstallersets.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonpipelines.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonpruners.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonresults.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektontriggers.operator.tekton.dev created
serviceaccount/tekton-operator created
role.rbac.authorization.k8s.io/tekton-operator-info created
clusterrole.rbac.authorization.k8s.io/tekton-config-read-role created
clusterrole.rbac.authorization.k8s.io/tekton-operator created
clusterrole.rbac.authorization.k8s.io/tekton-result-read-role created
rolebinding.rbac.authorization.k8s.io/tekton-operator-info created
clusterrolebinding.rbac.authorization.k8s.io/tekton-config-read-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/tekton-operator created
clusterrolebinding.rbac.authorization.k8s.io/tekton-result-read-rolebinding created
configmap/config-logging created
configmap/tekton-config-defaults created
configmap/tekton-config-observability created
configmap/tekton-operator-controller-config-leader-election created
configmap/tekton-operator-info created
configmap/tekton-operator-webhook-config-leader-election created
secret/tekton-operator-webhook-certs created
service/tekton-operator created
service/tekton-operator-webhook created
deployment.apps/tekton-operator created
deployment.apps/tekton-operator-webhook created
pod/tekton-operator-864c79545c-prs2q condition met
pod/tekton-operator-webhook-b678db645-8h4n8 condition met
tektonconfig.operator.tekton.dev/config condition met
⚙️ Configuring Tekton...
Warning: resource tektonconfigs/config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
tektonconfig.operator.tekton.dev/config configured
🔄 Setting up Pipeline As Code...
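Note: the last-applied-configuration warning above is expected here. The TektonConfig object named config is created by the Tekton operator itself rather than by a declarative kubectl apply, so the first client-side apply over it has no last-applied state and kubectl patches the annotation in automatically. One way to avoid the warning, sketched with a placeholder manifest path rather than the script's real one, is to use server-side apply for that overlay:
  kubectl apply --server-side --force-conflicts -f <tekton-config-overlay.yaml>   # <...> is a placeholder, not a path from this run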
namespace/pipelines-as-code created
customresourcedefinition.apiextensions.k8s.io/repositories.pipelinesascode.tekton.dev created
serviceaccount/pipelines-as-code-controller created
serviceaccount/pipelines-as-code-watcher created
serviceaccount/pipelines-as-code-webhook created
role.rbac.authorization.k8s.io/pipelines-as-code-controller-role created
role.rbac.authorization.k8s.io/pipelines-as-code-info created
role.rbac.authorization.k8s.io/pipelines-as-code-watcher-role created
role.rbac.authorization.k8s.io/pipelines-as-code-webhook-role created
clusterrole.rbac.authorization.k8s.io/pipeline-as-code-controller-clusterrole created
clusterrole.rbac.authorization.k8s.io/pipeline-as-code-watcher-clusterrole created
clusterrole.rbac.authorization.k8s.io/pipeline-as-code-webhook-clusterrole created
clusterrole.rbac.authorization.k8s.io/pipelines-as-code-aggregate created
rolebinding.rbac.authorization.k8s.io/pipelines-as-code-controller-binding created
rolebinding.rbac.authorization.k8s.io/pipelines-as-code-info created
rolebinding.rbac.authorization.k8s.io/pipelines-as-code-watcher-binding created
rolebinding.rbac.authorization.k8s.io/pipelines-as-code-webhook-binding created
clusterrolebinding.rbac.authorization.k8s.io/pipelines-as-code-controller-clusterbinding created
clusterrolebinding.rbac.authorization.k8s.io/pipelines-as-code-watcher-clusterbinding created
clusterrolebinding.rbac.authorization.k8s.io/pipelines-as-code-webhook-clusterbinding created
configmap/pac-config-logging created
configmap/pac-watcher-config-leader-election created
configmap/pac-webhook-config-leader-election created
configmap/pipelines-as-code created
configmap/pipelines-as-code-config-observability created
configmap/pipelines-as-code-info created
secret/pipelines-as-code-webhook-certs created
service/pipelines-as-code-controller created
service/pipelines-as-code-watcher created
service/pipelines-as-code-webhook created
deployment.apps/pipelines-as-code-controller created
deployment.apps/pipelines-as-code-watcher created
deployment.apps/pipelines-as-code-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.pipelinesascode.tekton.dev created
tektonconfig.operator.tekton.dev/config condition met
🔐 Setting up Tekton Chains RBAC...
namespace/openshift-pipelines created
serviceaccount/chains-secrets-admin created
role.rbac.authorization.k8s.io/chains-secret-admin created
role.rbac.authorization.k8s.io/chains-secret-admin created
clusterrole.rbac.authorization.k8s.io/tekton-chains-public-key-viewer created
rolebinding.rbac.authorization.k8s.io/chains-secret-admin created
rolebinding.rbac.authorization.k8s.io/tekton-chains-public-key-viewer created
rolebinding.rbac.authorization.k8s.io/chains-secret-admin created
rolebinding.rbac.authorization.k8s.io/tekton-chains-public-key-viewer created
job.batch/tekton-chains-signing-secret created
🔑 Deploying Dex...
namespace/dex created
serviceaccount/dex created
clusterrole.rbac.authorization.k8s.io/dex created
clusterrolebinding.rbac.authorization.k8s.io/dex created
configmap/dex-7hm4fc5fb8 created
service/dex created
deployment.apps/dex created
certificate.cert-manager.io/dex-cert created
Error from server (NotFound): secrets "oauth2-proxy-client-secret" not found
🔑 Creating secret oauth2-proxy-client-secret
secret/oauth2-proxy-client-secret created
📦 Deploying Registry...
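Note: the "Error from server (NotFound)" lines for oauth2-proxy-client-secret (above, and again in the UI step later) are the install script probing for the secret before creating it, so a non-zero kubectl get on a fresh cluster is expected rather than a real failure. A rough sketch of that check-then-create idiom; the dex namespace and the client-secret key match the dex pod spec dumped at the end of this log, while generating the value with openssl rand is an assumption:
  kubectl get secret oauth2-proxy-client-secret -n dex >/dev/null 2>&1 || \
    kubectl create secret generic oauth2-proxy-client-secret -n dex \
      --from-literal=client-secret="$(openssl rand -base64 32)"   # value generation is assumed, not shown in the log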
namespace/kind-registry created service/registry-service created deployment.apps/registry created certificate.cert-manager.io/registry-cert created bundle.trust.cert-manager.io/trusted-ca created pod/registry-68dcdc78fb-kcnnb condition met ๐Ÿ”„ Deploying Smee... Randomizing smee-channel ID namespace/smee-client created deployment.apps/gosmee-client created ๐Ÿ›ก๏ธ Deploying Kyverno... namespace/kyverno serverside-applied customresourcedefinition.apiextensions.k8s.io/cleanuppolicies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/clustercleanuppolicies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/clusterephemeralreports.reports.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/clusterpolicies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/clusterpolicyreports.wgpolicyk8s.io serverside-applied customresourcedefinition.apiextensions.k8s.io/deletingpolicies.policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/ephemeralreports.reports.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/generatingpolicies.policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/globalcontextentries.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/imagevalidatingpolicies.policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/mutatingpolicies.policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/namespaceddeletingpolicies.policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/namespacedimagevalidatingpolicies.policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/namespacedvalidatingpolicies.policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/policyexceptions.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/policyexceptions.policies.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/policyreports.wgpolicyk8s.io serverside-applied customresourcedefinition.apiextensions.k8s.io/updaterequests.kyverno.io serverside-applied customresourcedefinition.apiextensions.k8s.io/validatingpolicies.policies.kyverno.io serverside-applied serviceaccount/kyverno-admission-controller serverside-applied serviceaccount/kyverno-background-controller serverside-applied serviceaccount/kyverno-cleanup-controller serverside-applied serviceaccount/kyverno-reports-controller serverside-applied role.rbac.authorization.k8s.io/kyverno:admission-controller serverside-applied role.rbac.authorization.k8s.io/kyverno:background-controller serverside-applied role.rbac.authorization.k8s.io/kyverno:cleanup-controller serverside-applied role.rbac.authorization.k8s.io/kyverno:reports-controller serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno-manage-resources serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:admission-controller serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:admission-controller:core serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:background-controller serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:background-controller:core serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:cleanup-controller serverside-applied 
clusterrole.rbac.authorization.k8s.io/kyverno:cleanup-controller:core serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:policies serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:policyreports serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:reports serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:updaterequests serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:policies serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:policyreports serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:reports serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:updaterequests serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:reports-controller serverside-applied clusterrole.rbac.authorization.k8s.io/kyverno:reports-controller:core serverside-applied rolebinding.rbac.authorization.k8s.io/kyverno:admission-controller serverside-applied rolebinding.rbac.authorization.k8s.io/kyverno:background-controller serverside-applied rolebinding.rbac.authorization.k8s.io/kyverno:cleanup-controller serverside-applied rolebinding.rbac.authorization.k8s.io/kyverno:reports-controller serverside-applied clusterrolebinding.rbac.authorization.k8s.io/kyverno:admission-controller serverside-applied clusterrolebinding.rbac.authorization.k8s.io/kyverno:admission-controller:view serverside-applied clusterrolebinding.rbac.authorization.k8s.io/kyverno:background-controller serverside-applied clusterrolebinding.rbac.authorization.k8s.io/kyverno:background-controller:view serverside-applied clusterrolebinding.rbac.authorization.k8s.io/kyverno:cleanup-controller serverside-applied clusterrolebinding.rbac.authorization.k8s.io/kyverno:reports-controller serverside-applied clusterrolebinding.rbac.authorization.k8s.io/kyverno:reports-controller:view serverside-applied configmap/kyverno serverside-applied configmap/kyverno-metrics serverside-applied service/kyverno-background-controller-metrics serverside-applied service/kyverno-cleanup-controller serverside-applied service/kyverno-cleanup-controller-metrics serverside-applied service/kyverno-reports-controller-metrics serverside-applied service/kyverno-svc serverside-applied service/kyverno-svc-metrics serverside-applied deployment.apps/kyverno-admission-controller serverside-applied deployment.apps/kyverno-background-controller serverside-applied deployment.apps/kyverno-cleanup-controller serverside-applied deployment.apps/kyverno-reports-controller serverside-applied clusterpolicy.kyverno.io/reduce-tekton-pr-taskrun-resource-requests created clusterpolicy.kyverno.io/set-skip-checks-parameter created ๐Ÿ“‹ Deploying Konflux Info... namespace/konflux-info created role.rbac.authorization.k8s.io/konflux-public-info-view-role created rolebinding.rbac.authorization.k8s.io/konflux-public-info-view-rb created configmap/konflux-banner-configmap created configmap/konflux-public-info created โณ Waiting for the dependencies to be ready โณ Waiting for Tekton configuration to be ready... tektonconfig.operator.tekton.dev/config condition met โณ Waiting for all deployments to be available... 
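Note: the per-deployment "condition met" lines that follow are the output format of kubectl wait. The dependency check below (and the Konflux readiness check that later times out) presumably iterates over every Deployment in the cluster with something close to the sketch here; the Available condition and the timeout value are assumptions, since the script's actual flags are not shown in this log:
  kubectl wait --for=condition=Available deployment --all --all-namespaces --timeout=300s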
deployment.apps/cert-manager condition met deployment.apps/cert-manager-cainjector condition met deployment.apps/cert-manager-webhook condition met deployment.apps/trust-manager condition met deployment.apps/dex condition met deployment.apps/registry condition met deployment.apps/coredns condition met deployment.apps/kyverno-admission-controller condition met deployment.apps/kyverno-background-controller condition met deployment.apps/kyverno-cleanup-controller condition met deployment.apps/kyverno-reports-controller condition met deployment.apps/local-path-provisioner condition met deployment.apps/pipelines-as-code-controller condition met deployment.apps/pipelines-as-code-watcher condition met deployment.apps/pipelines-as-code-webhook condition met deployment.apps/gosmee-client condition met deployment.apps/tekton-operator condition met deployment.apps/tekton-operator-webhook condition met deployment.apps/tekton-chains-controller condition met deployment.apps/tekton-events-controller condition met deployment.apps/tekton-operator-proxy-webhook condition met deployment.apps/tekton-pipelines-controller condition met deployment.apps/tekton-pipelines-remote-resolvers condition met deployment.apps/tekton-pipelines-webhook condition met deployment.apps/tekton-results-api condition met deployment.apps/tekton-results-retention-policy-agent condition met deployment.apps/tekton-results-watcher condition met deployment.apps/tekton-triggers-controller condition met deployment.apps/tekton-triggers-core-interceptors condition met deployment.apps/tekton-triggers-webhook condition met โณ Waiting for Tekton configuration to be ready... tektonconfig.operator.tekton.dev/config condition met โณ Waiting for all deployments to be available... deployment.apps/cert-manager condition met deployment.apps/cert-manager-cainjector condition met deployment.apps/cert-manager-webhook condition met deployment.apps/trust-manager condition met deployment.apps/dex condition met deployment.apps/registry condition met deployment.apps/coredns condition met deployment.apps/kyverno-admission-controller condition met deployment.apps/kyverno-background-controller condition met deployment.apps/kyverno-cleanup-controller condition met deployment.apps/kyverno-reports-controller condition met deployment.apps/local-path-provisioner condition met deployment.apps/pipelines-as-code-controller condition met deployment.apps/pipelines-as-code-watcher condition met deployment.apps/pipelines-as-code-webhook condition met deployment.apps/gosmee-client condition met deployment.apps/tekton-operator condition met deployment.apps/tekton-operator-webhook condition met deployment.apps/tekton-chains-controller condition met deployment.apps/tekton-events-controller condition met deployment.apps/tekton-operator-proxy-webhook condition met deployment.apps/tekton-pipelines-controller condition met deployment.apps/tekton-pipelines-remote-resolvers condition met deployment.apps/tekton-pipelines-webhook condition met deployment.apps/tekton-results-api condition met deployment.apps/tekton-results-retention-policy-agent condition met deployment.apps/tekton-results-watcher condition met deployment.apps/tekton-triggers-controller condition met deployment.apps/tekton-triggers-core-interceptors condition met deployment.apps/tekton-triggers-webhook condition met [INFO] Installing Konflux CI... Deploying Konflux ๐Ÿš€ Deploying Application API CRDs... 
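Note: the CRDs applied in the next step are the shared appstudio.redhat.com API types (applications, components, snapshots, and so on) that the build, integration, and release services deployed afterwards all depend on. A quick way to confirm they registered, using only standard kubectl:
  kubectl get crd | grep appstudio.redhat.com
  kubectl api-resources --api-group=appstudio.redhat.com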
customresourcedefinition.apiextensions.k8s.io/applications.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/componentdetectionqueries.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/components.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/deploymenttargetclaims.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/deploymenttargetclasses.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/deploymenttargets.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/environments.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/promotionruns.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/snapshotenvironmentbindings.appstudio.redhat.com created customresourcedefinition.apiextensions.k8s.io/snapshots.appstudio.redhat.com created ๐Ÿ‘ฅ Setting up RBAC permissions... clusterrole.rbac.authorization.k8s.io/konflux-admin-user-actions created clusterrole.rbac.authorization.k8s.io/konflux-admin-user-actions-batch created clusterrole.rbac.authorization.k8s.io/konflux-admin-user-actions-core created clusterrole.rbac.authorization.k8s.io/konflux-admin-user-actions-extra created clusterrole.rbac.authorization.k8s.io/konflux-contributor-user-actions created clusterrole.rbac.authorization.k8s.io/konflux-contributor-user-actions-core created clusterrole.rbac.authorization.k8s.io/konflux-contributor-user-actions-extra created clusterrole.rbac.authorization.k8s.io/konflux-maintainer-user-actions created clusterrole.rbac.authorization.k8s.io/konflux-maintainer-user-actions-core created clusterrole.rbac.authorization.k8s.io/konflux-maintainer-user-actions-extra created clusterrole.rbac.authorization.k8s.io/konflux-self-access-reviewer created clusterrole.rbac.authorization.k8s.io/konflux-viewer-user-actions created clusterrole.rbac.authorization.k8s.io/konflux-viewer-user-actions-core created clusterrole.rbac.authorization.k8s.io/konflux-viewer-user-actions-extra created ๐Ÿ“œ Deploying Enterprise Contract... 
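Note: the first Enterprise Contract apply below fails with "resource mapping not found ... ensure CRDs are installed first" because the EnterpriseContractPolicy CRs are submitted in the same apply that creates their CRD, and the CRD is not yet established when the CRs reach the API server; the script's built-in retry (attempt 2/3) then succeeds. The same race shows up once more for the ReleaseServiceConfig CR in the Release Service step. A hedged sketch of an explicit guard between the two passes, assuming the directory is applied with kustomize (-k):
  kubectl apply -k ./konflux-ci/enterprise-contract || true   # first pass may lose the CRD race
  kubectl wait --for=condition=Established \
    crd/enterprisecontractpolicies.appstudio.redhat.com --timeout=60s
  kubectl apply -k ./konflux-ci/enterprise-contract           # CRs now have a resource mapping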
namespace/enterprise-contract-service created
customresourcedefinition.apiextensions.k8s.io/enterprisecontractpolicies.appstudio.redhat.com created
clusterrole.rbac.authorization.k8s.io/enterprisecontractpolicy-editor-role created
clusterrole.rbac.authorization.k8s.io/enterprisecontractpolicy-viewer-role created
rolebinding.rbac.authorization.k8s.io/public-ec-cm created
rolebinding.rbac.authorization.k8s.io/public-ecp created
configmap/ec-defaults created
resource mapping not found for name: "all" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1"
ensure CRDs are installed first
resource mapping not found for name: "default" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1"
ensure CRDs are installed first
resource mapping not found for name: "redhat" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1"
ensure CRDs are installed first
resource mapping not found for name: "redhat-no-hermetic" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1"
ensure CRDs are installed first
resource mapping not found for name: "redhat-trusted-tasks" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1"
ensure CRDs are installed first
resource mapping not found for name: "slsa3" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1"
ensure CRDs are installed first
🔄 Retrying command (attempt 2/3)...
namespace/enterprise-contract-service unchanged
customresourcedefinition.apiextensions.k8s.io/enterprisecontractpolicies.appstudio.redhat.com unchanged
clusterrole.rbac.authorization.k8s.io/enterprisecontractpolicy-editor-role unchanged
clusterrole.rbac.authorization.k8s.io/enterprisecontractpolicy-viewer-role unchanged
rolebinding.rbac.authorization.k8s.io/public-ec-cm unchanged
rolebinding.rbac.authorization.k8s.io/public-ecp unchanged
configmap/ec-defaults unchanged
enterprisecontractpolicy.appstudio.redhat.com/all created
enterprisecontractpolicy.appstudio.redhat.com/default created
enterprisecontractpolicy.appstudio.redhat.com/redhat created
enterprisecontractpolicy.appstudio.redhat.com/redhat-no-hermetic created
enterprisecontractpolicy.appstudio.redhat.com/redhat-trusted-tasks created
enterprisecontractpolicy.appstudio.redhat.com/slsa3 created
🎯 Deploying Release Service...
namespace/release-service serverside-applied customresourcedefinition.apiextensions.k8s.io/internalrequests.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/internalservicesconfigs.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/releaseplanadmissions.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/releaseplans.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/releases.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/releaseserviceconfigs.appstudio.redhat.com serverside-applied serviceaccount/release-service-controller-manager serverside-applied role.rbac.authorization.k8s.io/release-service-leader-election-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-pipeline-resource-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-application-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-component-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-environment-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-manager-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-metrics-auth-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-release-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-release-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-releaseplan-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-releaseplan-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-releaseplanadmission-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-releaseplanadmission-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-snapshot-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-snapshot-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-snapshotenvironmentbinding-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-tekton-role serverside-applied clusterrole.rbac.authorization.k8s.io/releaseserviceconfig-role serverside-applied rolebinding.rbac.authorization.k8s.io/release-service-leader-election-rolebinding serverside-applied rolebinding.rbac.authorization.k8s.io/releaseserviceconfigs-rolebinding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-application-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-component-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-environment-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-manager-rolebinding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-metrics-auth-rolebinding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-release-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-releaseplan-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-releaseplanadmission-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-snapshot-role-binding 
serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-snapshotenvironmentbinding-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-tekton-role-binding serverside-applied configmap/release-service-manager-config serverside-applied configmap/release-service-manager-properties serverside-applied service/release-service-controller-manager-metrics-service serverside-applied service/release-service-webhook-service serverside-applied deployment.apps/release-service-controller-manager serverside-applied certificate.cert-manager.io/serving-cert serverside-applied issuer.cert-manager.io/selfsigned-issuer serverside-applied mutatingwebhookconfiguration.admissionregistration.k8s.io/release-service-mutating-webhook-configuration serverside-applied validatingwebhookconfiguration.admissionregistration.k8s.io/release-service-validating-webhook-configuration serverside-applied error: resource mapping not found for name: "release-service-config" namespace: "release-service" from "./konflux-ci/release": no matches for kind "ReleaseServiceConfig" in version "appstudio.redhat.com/v1alpha1" ensure CRDs are installed first ๐Ÿ”„ Retrying command (attempt 2/3)... namespace/release-service serverside-applied customresourcedefinition.apiextensions.k8s.io/internalrequests.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/internalservicesconfigs.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/releaseplanadmissions.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/releaseplans.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/releases.appstudio.redhat.com serverside-applied customresourcedefinition.apiextensions.k8s.io/releaseserviceconfigs.appstudio.redhat.com serverside-applied serviceaccount/release-service-controller-manager serverside-applied role.rbac.authorization.k8s.io/release-service-leader-election-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-pipeline-resource-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-application-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-component-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-environment-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-manager-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-metrics-auth-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-release-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-release-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-releaseplan-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-releaseplan-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-releaseplanadmission-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-releaseplanadmission-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-snapshot-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-snapshot-viewer-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-snapshotenvironmentbinding-editor-role serverside-applied clusterrole.rbac.authorization.k8s.io/release-service-tekton-role 
serverside-applied clusterrole.rbac.authorization.k8s.io/releaseserviceconfig-role serverside-applied rolebinding.rbac.authorization.k8s.io/release-service-leader-election-rolebinding serverside-applied rolebinding.rbac.authorization.k8s.io/releaseserviceconfigs-rolebinding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-application-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-component-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-environment-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-manager-rolebinding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-metrics-auth-rolebinding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-release-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-releaseplan-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-releaseplanadmission-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-snapshot-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-snapshotenvironmentbinding-role-binding serverside-applied clusterrolebinding.rbac.authorization.k8s.io/release-service-tekton-role-binding serverside-applied configmap/release-service-manager-config serverside-applied configmap/release-service-manager-properties serverside-applied service/release-service-controller-manager-metrics-service serverside-applied service/release-service-webhook-service serverside-applied deployment.apps/release-service-controller-manager serverside-applied releaseserviceconfig.appstudio.redhat.com/release-service-config serverside-applied certificate.cert-manager.io/serving-cert serverside-applied issuer.cert-manager.io/selfsigned-issuer serverside-applied mutatingwebhookconfiguration.admissionregistration.k8s.io/release-service-mutating-webhook-configuration serverside-applied validatingwebhookconfiguration.admissionregistration.k8s.io/release-service-validating-webhook-configuration serverside-applied ๐Ÿ—๏ธ Deploying Build Service... namespace/build-service created serviceaccount/build-service-controller-manager created role.rbac.authorization.k8s.io/build-service-build-pipeline-config-read-only created role.rbac.authorization.k8s.io/build-service-leader-election-role created clusterrole.rbac.authorization.k8s.io/appstudio-pipelines-runner created clusterrole.rbac.authorization.k8s.io/build-service-manager-role created clusterrole.rbac.authorization.k8s.io/build-service-metrics-auth-role created rolebinding.rbac.authorization.k8s.io/build-pipeline-config-read-only-binding created rolebinding.rbac.authorization.k8s.io/build-service-leader-election-rolebinding created clusterrolebinding.rbac.authorization.k8s.io/build-pipeline-runner-rolebinding created clusterrolebinding.rbac.authorization.k8s.io/build-service-manager-rolebinding created clusterrolebinding.rbac.authorization.k8s.io/build-service-metrics-auth-rolebinding created configmap/build-pipeline-config created service/build-service-controller-manager-metrics-service created deployment.apps/build-service-controller-manager created ๐Ÿ”„ Deploying Integration Service... 
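Note: the Integration Service deployed below is the component under test in this run; its controller image was overridden at the top of the log to quay.io/redhat-user-workloads/rhtap-integration-tenant/integration-service/integration-service with the on-pr tag shown there, and integration-service-controller-manager is also the first deployment reported as timed out in the final wait. To confirm which image the deployment actually ended up running:
  kubectl get deployment integration-service-controller-manager -n integration-service \
    -o jsonpath='{.spec.template.spec.containers[*].image}'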
namespace/integration-service created
customresourcedefinition.apiextensions.k8s.io/componentgroups.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/integrationtestscenarios.appstudio.redhat.com created
serviceaccount/integration-service-controller-manager created
serviceaccount/integration-service-snapshot-garbage-collector created
role.rbac.authorization.k8s.io/integration-service-leader-election-role created
clusterrole.rbac.authorization.k8s.io/integration-service-componentgroup-admin-role created
clusterrole.rbac.authorization.k8s.io/integration-service-componentgroup-editor-role created
clusterrole.rbac.authorization.k8s.io/integration-service-componentgroup-viewer-role created
clusterrole.rbac.authorization.k8s.io/integration-service-integrationtestscenario-admin-role created
clusterrole.rbac.authorization.k8s.io/integration-service-integrationtestscenario-editor-role created
clusterrole.rbac.authorization.k8s.io/integration-service-integrationtestscenario-viewer-role created
clusterrole.rbac.authorization.k8s.io/integration-service-manager-role created
clusterrole.rbac.authorization.k8s.io/integration-service-metrics-auth-role created
clusterrole.rbac.authorization.k8s.io/integration-service-snapshot-garbage-collector created
clusterrole.rbac.authorization.k8s.io/integration-service-tekton-editor-role created
clusterrole.rbac.authorization.k8s.io/konflux-integration-runner created
rolebinding.rbac.authorization.k8s.io/integration-service-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/integration-service-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/integration-service-metrics-auth-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/integration-service-snapshot-garbage-collector created
clusterrolebinding.rbac.authorization.k8s.io/integration-service-tekton-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/kyverno-background-controller-konflux-integration-runner created
configmap/integration-service-manager-config created
service/integration-service-controller-manager-metrics-service created
service/integration-service-webhook-service created
deployment.apps/integration-service-controller-manager created
cronjob.batch/integration-service-snapshot-garbage-collector created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
clusterpolicy.kyverno.io/init-ns-integration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/integration-service-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/integration-service-validating-webhook-configuration created
📋 Setting up Namespace Lister...
namespace/namespace-lister created
serviceaccount/namespace-lister created
clusterrole.rbac.authorization.k8s.io/namespace-lister-authorizer created
clusterrolebinding.rbac.authorization.k8s.io/namespace-lister-authorizer created
service/namespace-lister created
deployment.apps/namespace-lister created
certificate.cert-manager.io/namespace-lister created
clusterpolicy.kyverno.io/deny-virtual-domain created
networkpolicy.networking.k8s.io/namespace-lister-allow-from-konfluxui created
networkpolicy.networking.k8s.io/namespace-lister-allow-to-apiserver created
🎨 Deploying UI components...
namespace/konflux-ui created
serviceaccount/proxy created
clusterrole.rbac.authorization.k8s.io/konflux-proxy created
clusterrole.rbac.authorization.k8s.io/konflux-proxy-namespace-lister created
clusterrolebinding.rbac.authorization.k8s.io/konflux-proxy created
clusterrolebinding.rbac.authorization.k8s.io/konflux-proxy-namespace-lister created
configmap/nginx-idp-location-h959ghd6bh created
configmap/proxy-6bg85b98b7 created
configmap/proxy-nginx-static-fmmfg7d22f created
configmap/proxy-nginx-templates-4m8fgtf4m9 created
secret/proxy created
service/proxy created
deployment.apps/proxy created
certificate.cert-manager.io/serving-cert created
Error from server (NotFound): secrets "oauth2-proxy-client-secret" not found
🔑 Setting up OAuth2 proxy client secret...
secret/oauth2-proxy-client-secret created
Error from server (NotFound): secrets "oauth2-proxy-cookie-secret" not found
🍪 Creating OAuth2 proxy cookie secret...
secret/oauth2-proxy-cookie-secret created
Waiting for Konflux to be ready
⏳ Waiting for Tekton configuration to be ready...
tektonconfig.operator.tekton.dev/config condition met
⏳ Waiting for all deployments to be available...
deployment.apps/build-service-controller-manager condition met
deployment.apps/cert-manager condition met
deployment.apps/cert-manager-cainjector condition met
deployment.apps/cert-manager-webhook condition met
deployment.apps/trust-manager condition met
deployment.apps/dex condition met
timed out waiting for the condition on deployments/integration-service-controller-manager
timed out waiting for the condition on deployments/registry
timed out waiting for the condition on deployments/proxy
timed out waiting for the condition on deployments/coredns
timed out waiting for the condition on deployments/kyverno-admission-controller
timed out waiting for the condition on deployments/kyverno-background-controller
timed out waiting for the condition on deployments/kyverno-cleanup-controller
timed out waiting for the condition on deployments/kyverno-reports-controller
timed out waiting for the condition on deployments/local-path-provisioner
timed out waiting for the condition on deployments/namespace-lister
timed out waiting for the condition on deployments/pipelines-as-code-controller
timed out waiting for the condition on deployments/pipelines-as-code-watcher
timed out waiting for the condition on deployments/pipelines-as-code-webhook
timed out waiting for the condition on deployments/release-service-controller-manager
timed out waiting for the condition on deployments/gosmee-client
timed out waiting for the condition on deployments/tekton-operator
timed out waiting for the condition on deployments/tekton-operator-webhook
timed out waiting for the condition on deployments/tekton-chains-controller
timed out waiting for the condition on deployments/tekton-events-controller
timed out waiting for the condition on deployments/tekton-operator-proxy-webhook
timed out waiting for the condition on deployments/tekton-pipelines-controller
timed out waiting for the condition on deployments/tekton-pipelines-remote-resolvers
timed out waiting for the condition on deployments/tekton-pipelines-webhook
timed out waiting for the condition on deployments/tekton-results-api
timed out waiting for the condition on deployments/tekton-results-retention-policy-agent
timed out waiting for the condition on deployments/tekton-results-watcher
timed out waiting for the condition on deployments/tekton-triggers-controller
timed out waiting for the condition on
deployments/tekton-triggers-core-interceptors timed out waiting for the condition on deployments/tekton-triggers-webhook Deployment failed Generating error logs Collecting resource monitoring information... logs from all pods from user-ns2 namespace ---------- namespace 'build-service' ---------- ---------- namespace 'cert-manager' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T13:00:41Z" generateName: trust-manager-7c9f8b8f7d- labels: app: trust-manager app.kubernetes.io/instance: trust-manager app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: trust-manager app.kubernetes.io/version: v0.12.0 helm.sh/chart: trust-manager-v0.12.0 pod-template-hash: 7c9f8b8f7d name: trust-manager-7c9f8b8f7d-s7tzx namespace: cert-manager ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: trust-manager-7c9f8b8f7d uid: 68d82b51-1c67-4426-8c20-b8ea7e942a98 resourceVersion: "821" uid: 7c696c0e-9668-4464-bf59-1d952cf4832d spec: containers: - args: - --log-format=text - --log-level=1 - --metrics-port=9402 - --readiness-probe-port=6060 - --readiness-probe-path=/readyz - --leader-election-lease-duration=15s - --leader-election-renew-deadline=10s - --trust-namespace=cert-manager - --webhook-host=0.0.0.0 - --webhook-port=6443 - --webhook-certificate-dir=/tls - --default-package-location=/packages/cert-manager-package-debian.json image: quay.io/jetstack/trust-manager:v0.12.0 imagePullPolicy: IfNotPresent name: trust-manager ports: - containerPort: 6443 protocol: TCP - containerPort: 9402 protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 6060 scheme: HTTP initialDelaySeconds: 3 periodSeconds: 7 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 100m memory: 250Mi requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsNonRoot: true seccompProfile: type: RuntimeDefault terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tls name: tls readOnly: true - mountPath: /packages name: packages readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9c2jp readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true initContainers: - args: - /copyandmaybepause - /debian-package - /packages image: quay.io/jetstack/cert-manager-package-debian:20210119.0 imagePullPolicy: IfNotPresent name: cert-manager-package-debian resources: limits: cpu: 100m memory: 250Mi requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsNonRoot: true seccompProfile: type: RuntimeDefault terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /packages name: packages - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9c2jp readOnly: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: trust-manager serviceAccountName: trust-manager terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - emptyDir: sizeLimit: 50M name: packages - name: tls 
secret: defaultMode: 420 secretName: trust-manager-tls - name: kube-api-access-9c2jp projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:00:45Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:00:45Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:00:56Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:00:56Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:00:41Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://b9cf1e7daac8a3a61c4a72e7aa723d9bb21761c3cf7879096a8e7fdf21e9ddd4 image: quay.io/jetstack/trust-manager:v0.12.0 imageID: quay.io/jetstack/trust-manager@sha256:8285d0d1c374dcf6e29ddcac10a5c937502eb8c318dbd2411f9789ede0e23421 lastState: {} name: trust-manager ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:00:48Z" volumeMounts: - mountPath: /tls name: tls readOnly: true recursiveReadOnly: Disabled - mountPath: /packages name: packages readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9c2jp readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 initContainerStatuses: - containerID: containerd://20093bdf9dc135b47921b60c9fb566a6dbe3ce2fd4b46409160ab0b80f8037fd image: quay.io/jetstack/cert-manager-package-debian:20210119.0 imageID: quay.io/jetstack/cert-manager-package-debian@sha256:116133f68938ef568aca17a0c691d5b1ef73a9a207029c9a068cf4230053fed5 lastState: {} name: cert-manager-package-debian ready: true restartCount: 0 started: false state: terminated: containerID: containerd://20093bdf9dc135b47921b60c9fb566a6dbe3ce2fd4b46409160ab0b80f8037fd exitCode: 0 finishedAt: "2026-01-21T13:00:45Z" reason: Completed startedAt: "2026-01-21T13:00:45Z" volumeMounts: - mountPath: /packages name: packages - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9c2jp readOnly: true recursiveReadOnly: Disabled phase: Running podIP: 10.244.0.10 podIPs: - ip: 10.244.0.10 qosClass: Burstable startTime: "2026-01-21T13:00:41Z" --- Pod 'trust-manager-7c9f8b8f7d-s7tzx' under namespace 'cert-manager': Pod trust-manager-7c9f8b8f7d-s7tzx MountVolume.SetUp failed for volume "tls" : secret "trust-manager-tls" not found (FailedMount) 2026/01/21 13:00:45 reading from /debian-package 2026/01/21 13:00:45 writing to /packages 2026/01/21 13:00:45 successfully copied /debian-package/cert-manager-package-debian.json to /packages/cert-manager-package-debian.json time=2026-01-21T13:00:48.426Z level=INFO msg="successfully loaded default package from filesystem" logger=trust/bundle path=/packages/cert-manager-package-debian.json time=2026-01-21T13:00:48.426Z level=INFO msg="registering webhook endpoints" logger=trust/webhook time=2026-01-21T13:00:48.426Z level=INFO msg="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called" logger=trust/manager/controller-runtime/builder GVK="trust.cert-manager.io/v1alpha1, Kind=Bundle" time=2026-01-21T13:00:48.426Z level=INFO msg="Registering a validating webhook" 
logger=trust/manager/controller-runtime/builder GVK="trust.cert-manager.io/v1alpha1, Kind=Bundle" path=/validate-trust-cert-manager-io-v1alpha1-bundle time=2026-01-21T13:00:48.435Z level=INFO msg="Registering webhook" path=/validate-trust-cert-manager-io-v1alpha1-bundle logger=trust/manager/controller-runtime/webhook time=2026-01-21T13:00:48.435Z level=INFO msg="Starting metrics server" logger=trust/manager/controller-runtime/metrics time=2026-01-21T13:00:48.435Z level=INFO msg="starting server" name="health probe" addr=[::]:6060 logger=trust/manager time=2026-01-21T13:00:48.435Z level=INFO msg="Serving metrics server" logger=trust/manager/controller-runtime/metrics bindAddress=0.0.0.0:9402 secure=false time=2026-01-21T13:00:48.435Z level=INFO msg="Starting webhook server" logger=trust/manager/controller-runtime/webhook time=2026-01-21T13:00:48.435Z level=INFO msg="attempting to acquire leader lease cert-manager/trust-manager-leader-election..." time=2026-01-21T13:00:48.435Z level=INFO msg="Updated current TLS certificate" logger=trust/manager/controller-runtime/certwatcher time=2026-01-21T13:00:48.435Z level=INFO msg="Serving webhook server" logger=trust/manager/controller-runtime/webhook host=0.0.0.0 port=6443 time=2026-01-21T13:00:48.435Z level=INFO msg="Starting certificate watcher" logger=trust/manager/controller-runtime/certwatcher time=2026-01-21T13:00:48.439Z level=INFO msg="successfully acquired lease cert-manager/trust-manager-leader-election" time=2026-01-21T13:00:48.439Z level=DEBUG+3 msg="trust-manager-7c9f8b8f7d-s7tzx_27b45841-a689-4abb-b1f0-47a73f029370 became leader" logger=trust/manager/events type=Normal object="{Kind:Lease Namespace:cert-manager Name:trust-manager-leader-election UID:0d0ab107-f898-454e-96d0-bde7a6bcaf68 APIVersion:coordination.k8s.io/v1 ResourceVersion:803 FieldPath:}" reason=LeaderElection time=2026-01-21T13:00:48.439Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1alpha1.Bundle" time=2026-01-21T13:00:48.439Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1.Namespace" time=2026-01-21T13:00:48.439Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1.ConfigMap" time=2026-01-21T13:00:48.439Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1.Secret" time=2026-01-21T13:00:48.439Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1.PartialObjectMetadata" time=2026-01-21T13:00:48.439Z level=INFO msg="Starting Controller" controller=bundles logger=trust/manager time=2026-01-21T13:00:48.540Z level=INFO msg="Starting workers" controller=bundles logger=trust/manager "worker count"=1 time=2026-01-21T13:03:16.636Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2942 FieldPath:}" reason=Synced time=2026-01-21T13:03:16.637Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2942 FieldPath:}" reason=Synced time=2026-01-21T13:03:36.537Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" 
logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:03:36.537Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:03:38.539Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:03:38.539Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:03:50.660Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:03:50.661Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:10.436Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:10.436Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:20.837Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:20.837Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:33.257Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:33.257Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca 
UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:42.940Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:42.940Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:48.837Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:48.837Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:51.037Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced time=2026-01-21T13:04:51.037Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:32231648-cf14-4b24-997c-7885fcb7b9f1 APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:2964 FieldPath:}" reason=Synced ---------- namespace 'default' ---------- ---------- namespace 'dex' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T13:03:13Z" generateName: dex-77589666fc- labels: app: dex pod-template-hash: 77589666fc name: dex-77589666fc-98t5p namespace: dex ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: dex-77589666fc uid: 5be54b03-e02f-41fb-a81d-a164d8d83566 resourceVersion: "3512" uid: 8ec26dcc-5744-44a5-8ef8-7a49e333ef4b spec: containers: - command: - /usr/local/bin/dex - serve - /etc/dex/cfg/config.yaml env: - name: CLIENT_SECRET valueFrom: secretKeyRef: key: client-secret name: oauth2-proxy-client-secret image: ghcr.io/dexidp/dex:v2.44.0 imagePullPolicy: IfNotPresent name: dex ports: - containerPort: 9443 name: https protocol: TCP - containerPort: 5558 name: telemetry protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /healthz/ready port: telemetry scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 50m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/dex/cfg name: dex - mountPath: /etc/dex/tls name: tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pfwjn readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane 
preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: dex serviceAccountName: dex terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 items: - key: config.yaml path: config.yaml name: dex-7hm4fc5fb8 name: dex - name: tls secret: defaultMode: 420 secretName: dex-cert - name: kube-api-access-pfwjn projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:33Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:13Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:44Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:44Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:13Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://f138a6b764947f39f657e3c8a5f06e316ea0c449abff37a86580cc2b3362a913 image: ghcr.io/dexidp/dex:v2.44.0 imageID: ghcr.io/dexidp/dex@sha256:5d0656fce7d453c0e3b2706abf40c0d0ce5b371fb0b73b3cf714d05f35fa5f86 lastState: {} name: dex ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:03:33Z" volumeMounts: - mountPath: /etc/dex/cfg name: dex - mountPath: /etc/dex/tls name: tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pfwjn readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.34 podIPs: - ip: 10.244.0.34 qosClass: Burstable startTime: "2026-01-21T13:03:13Z" --- Pod 'dex-77589666fc-98t5p' under namespace 'dex': Pod dex-77589666fc-98t5p MountVolume.SetUp failed for volume "tls" : secret "dex-cert" not found (FailedMount) time=2026-01-21T13:03:33.319Z level=INFO msg="Version info" dex_version=v2.44.0 go.version=go1.25.0 go.os=linux go.arch=amd64 time=2026-01-21T13:03:33.319Z level=INFO msg="config issuer" issuer=https://54.71.21.136:9443/idp/ time=2026-01-21T13:03:33.319Z level=INFO msg="kubernetes client" api_version=dex.coreos.com/v1 time=2026-01-21T13:03:33.322Z level=INFO msg="creating custom Kubernetes resources" time=2026-01-21T13:03:33.322Z level=INFO msg="checking if custom resource has already been created..." object=authcodes.dex.coreos.com time=2026-01-21T13:03:33.323Z level=INFO msg="failed to list custom resource, attempting to create" object=authcodes.dex.coreos.com err="not found" time=2026-01-21T13:03:33.327Z level=ERROR msg="create custom resource" object=authcodes.dex.coreos.com time=2026-01-21T13:03:33.327Z level=INFO msg="checking if custom resource has already been created..." 
object=authrequests.dex.coreos.com time=2026-01-21T13:03:33.328Z level=INFO msg="failed to list custom resource, attempting to create" object=authrequests.dex.coreos.com err="not found" time=2026-01-21T13:03:33.335Z level=ERROR msg="create custom resource" object=authrequests.dex.coreos.com time=2026-01-21T13:03:33.335Z level=INFO msg="checking if custom resource has already been created..." object=oauth2clients.dex.coreos.com time=2026-01-21T13:03:33.335Z level=INFO msg="failed to list custom resource, attempting to create" object=oauth2clients.dex.coreos.com err="not found" time=2026-01-21T13:03:33.343Z level=ERROR msg="create custom resource" object=oauth2clients.dex.coreos.com time=2026-01-21T13:03:33.343Z level=INFO msg="checking if custom resource has already been created..." object=signingkeies.dex.coreos.com time=2026-01-21T13:03:33.343Z level=INFO msg="failed to list custom resource, attempting to create" object=signingkeies.dex.coreos.com err="not found" time=2026-01-21T13:03:33.350Z level=ERROR msg="create custom resource" object=signingkeies.dex.coreos.com time=2026-01-21T13:03:33.350Z level=INFO msg="checking if custom resource has already been created..." object=refreshtokens.dex.coreos.com time=2026-01-21T13:03:33.350Z level=INFO msg="failed to list custom resource, attempting to create" object=refreshtokens.dex.coreos.com err="not found" time=2026-01-21T13:03:33.357Z level=ERROR msg="create custom resource" object=refreshtokens.dex.coreos.com time=2026-01-21T13:03:33.357Z level=INFO msg="checking if custom resource has already been created..." object=passwords.dex.coreos.com time=2026-01-21T13:03:33.358Z level=INFO msg="failed to list custom resource, attempting to create" object=passwords.dex.coreos.com err="not found" time=2026-01-21T13:03:33.365Z level=ERROR msg="create custom resource" object=passwords.dex.coreos.com time=2026-01-21T13:03:33.365Z level=INFO msg="checking if custom resource has already been created..." object=offlinesessionses.dex.coreos.com time=2026-01-21T13:03:33.365Z level=INFO msg="failed to list custom resource, attempting to create" object=offlinesessionses.dex.coreos.com err="not found" time=2026-01-21T13:03:33.373Z level=ERROR msg="create custom resource" object=offlinesessionses.dex.coreos.com time=2026-01-21T13:03:33.373Z level=INFO msg="checking if custom resource has already been created..." object=connectors.dex.coreos.com time=2026-01-21T13:03:33.373Z level=INFO msg="failed to list custom resource, attempting to create" object=connectors.dex.coreos.com err="not found" time=2026-01-21T13:03:33.382Z level=ERROR msg="create custom resource" object=connectors.dex.coreos.com time=2026-01-21T13:03:33.382Z level=INFO msg="checking if custom resource has already been created..." object=devicerequests.dex.coreos.com time=2026-01-21T13:03:33.383Z level=INFO msg="failed to list custom resource, attempting to create" object=devicerequests.dex.coreos.com err="not found" time=2026-01-21T13:03:33.419Z level=ERROR msg="create custom resource" object=devicerequests.dex.coreos.com time=2026-01-21T13:03:33.419Z level=INFO msg="checking if custom resource has already been created..." 
object=devicetokens.dex.coreos.com time=2026-01-21T13:03:33.419Z level=INFO msg="failed to list custom resource, attempting to create" object=devicetokens.dex.coreos.com err="not found" time=2026-01-21T13:03:33.424Z level=ERROR msg="create custom resource" object=devicetokens.dex.coreos.com time=2026-01-21T13:03:33.424Z level=INFO msg="config storage" storage_type=kubernetes time=2026-01-21T13:03:33.424Z level=INFO msg="config static client" client_name=oauth2-proxy time=2026-01-21T13:03:33.424Z level=INFO msg="config connector: local passwords enabled" time=2026-01-21T13:03:33.424Z level=INFO msg="config skipping approval screen" time=2026-01-21T13:03:33.424Z level=INFO msg="config using password grant connector" password_connector=local time=2026-01-21T13:03:33.424Z level=INFO msg="config refresh tokens rotation" enabled=true time=2026-01-21T13:03:33.438Z level=INFO msg="keys expired, rotating" time=2026-01-21T13:03:35.424Z level=INFO msg="keys rotated" next_rotation=2026-01-21T19:03:35.420Z time=2026-01-21T13:03:35.424Z level=INFO msg="listening on" server=telemetry address=0.0.0.0:5558 time=2026-01-21T13:03:35.424Z level=INFO msg="listening on" server=https address=0.0.0.0:9443 apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T13:03:13Z" generateName: dex-77589666fc- labels: app: dex pod-template-hash: 77589666fc name: dex-77589666fc-98t5p namespace: dex ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: dex-77589666fc uid: 5be54b03-e02f-41fb-a81d-a164d8d83566 resourceVersion: "3512" uid: 8ec26dcc-5744-44a5-8ef8-7a49e333ef4b spec: containers: - command: - /usr/local/bin/dex - serve - /etc/dex/cfg/config.yaml env: - name: CLIENT_SECRET valueFrom: secretKeyRef: key: client-secret name: oauth2-proxy-client-secret image: ghcr.io/dexidp/dex:v2.44.0 imagePullPolicy: IfNotPresent name: dex ports: - containerPort: 9443 name: https protocol: TCP - containerPort: 5558 name: telemetry protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /healthz/ready port: telemetry scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 50m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/dex/cfg name: dex - mountPath: /etc/dex/tls name: tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pfwjn readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: dex serviceAccountName: dex terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 items: - key: config.yaml path: config.yaml name: dex-7hm4fc5fb8 name: dex - name: tls secret: defaultMode: 420 secretName: dex-cert - name: kube-api-access-pfwjn projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: 
null lastTransitionTime: "2026-01-21T13:03:33Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:13Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:44Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:44Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:13Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://f138a6b764947f39f657e3c8a5f06e316ea0c449abff37a86580cc2b3362a913 image: ghcr.io/dexidp/dex:v2.44.0 imageID: ghcr.io/dexidp/dex@sha256:5d0656fce7d453c0e3b2706abf40c0d0ce5b371fb0b73b3cf714d05f35fa5f86 lastState: {} name: dex ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:03:33Z" volumeMounts: - mountPath: /etc/dex/cfg name: dex - mountPath: /etc/dex/tls name: tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pfwjn readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.34 podIPs: - ip: 10.244.0.34 qosClass: Burstable startTime: "2026-01-21T13:03:13Z" --- Pod 'dex-77589666fc-98t5p' under namespace 'dex': Pod dex-77589666fc-98t5p Readiness probe failed: Get "http://10.244.0.34:5558/healthz/ready": dial tcp 10.244.0.34:5558: connect: connection refused (Unhealthy) time=2026-01-21T13:03:33.319Z level=INFO msg="Version info" dex_version=v2.44.0 go.version=go1.25.0 go.os=linux go.arch=amd64 time=2026-01-21T13:03:33.319Z level=INFO msg="config issuer" issuer=https://54.71.21.136:9443/idp/ time=2026-01-21T13:03:33.319Z level=INFO msg="kubernetes client" api_version=dex.coreos.com/v1 time=2026-01-21T13:03:33.322Z level=INFO msg="creating custom Kubernetes resources" time=2026-01-21T13:03:33.322Z level=INFO msg="checking if custom resource has already been created..." object=authcodes.dex.coreos.com time=2026-01-21T13:03:33.323Z level=INFO msg="failed to list custom resource, attempting to create" object=authcodes.dex.coreos.com err="not found" time=2026-01-21T13:03:33.327Z level=ERROR msg="create custom resource" object=authcodes.dex.coreos.com time=2026-01-21T13:03:33.327Z level=INFO msg="checking if custom resource has already been created..." object=authrequests.dex.coreos.com time=2026-01-21T13:03:33.328Z level=INFO msg="failed to list custom resource, attempting to create" object=authrequests.dex.coreos.com err="not found" time=2026-01-21T13:03:33.335Z level=ERROR msg="create custom resource" object=authrequests.dex.coreos.com time=2026-01-21T13:03:33.335Z level=INFO msg="checking if custom resource has already been created..." object=oauth2clients.dex.coreos.com time=2026-01-21T13:03:33.335Z level=INFO msg="failed to list custom resource, attempting to create" object=oauth2clients.dex.coreos.com err="not found" time=2026-01-21T13:03:33.343Z level=ERROR msg="create custom resource" object=oauth2clients.dex.coreos.com time=2026-01-21T13:03:33.343Z level=INFO msg="checking if custom resource has already been created..." object=signingkeies.dex.coreos.com time=2026-01-21T13:03:33.343Z level=INFO msg="failed to list custom resource, attempting to create" object=signingkeies.dex.coreos.com err="not found" time=2026-01-21T13:03:33.350Z level=ERROR msg="create custom resource" object=signingkeies.dex.coreos.com time=2026-01-21T13:03:33.350Z level=INFO msg="checking if custom resource has already been created..." 
object=refreshtokens.dex.coreos.com time=2026-01-21T13:03:33.350Z level=INFO msg="failed to list custom resource, attempting to create" object=refreshtokens.dex.coreos.com err="not found" time=2026-01-21T13:03:33.357Z level=ERROR msg="create custom resource" object=refreshtokens.dex.coreos.com time=2026-01-21T13:03:33.357Z level=INFO msg="checking if custom resource has already been created..." object=passwords.dex.coreos.com time=2026-01-21T13:03:33.358Z level=INFO msg="failed to list custom resource, attempting to create" object=passwords.dex.coreos.com err="not found" time=2026-01-21T13:03:33.365Z level=ERROR msg="create custom resource" object=passwords.dex.coreos.com time=2026-01-21T13:03:33.365Z level=INFO msg="checking if custom resource has already been created..." object=offlinesessionses.dex.coreos.com time=2026-01-21T13:03:33.365Z level=INFO msg="failed to list custom resource, attempting to create" object=offlinesessionses.dex.coreos.com err="not found" time=2026-01-21T13:03:33.373Z level=ERROR msg="create custom resource" object=offlinesessionses.dex.coreos.com time=2026-01-21T13:03:33.373Z level=INFO msg="checking if custom resource has already been created..." object=connectors.dex.coreos.com time=2026-01-21T13:03:33.373Z level=INFO msg="failed to list custom resource, attempting to create" object=connectors.dex.coreos.com err="not found" time=2026-01-21T13:03:33.382Z level=ERROR msg="create custom resource" object=connectors.dex.coreos.com time=2026-01-21T13:03:33.382Z level=INFO msg="checking if custom resource has already been created..." object=devicerequests.dex.coreos.com time=2026-01-21T13:03:33.383Z level=INFO msg="failed to list custom resource, attempting to create" object=devicerequests.dex.coreos.com err="not found" time=2026-01-21T13:03:33.419Z level=ERROR msg="create custom resource" object=devicerequests.dex.coreos.com time=2026-01-21T13:03:33.419Z level=INFO msg="checking if custom resource has already been created..." 
object=devicetokens.dex.coreos.com time=2026-01-21T13:03:33.419Z level=INFO msg="failed to list custom resource, attempting to create" object=devicetokens.dex.coreos.com err="not found" time=2026-01-21T13:03:33.424Z level=ERROR msg="create custom resource" object=devicetokens.dex.coreos.com time=2026-01-21T13:03:33.424Z level=INFO msg="config storage" storage_type=kubernetes time=2026-01-21T13:03:33.424Z level=INFO msg="config static client" client_name=oauth2-proxy time=2026-01-21T13:03:33.424Z level=INFO msg="config connector: local passwords enabled" time=2026-01-21T13:03:33.424Z level=INFO msg="config skipping approval screen" time=2026-01-21T13:03:33.424Z level=INFO msg="config using password grant connector" password_connector=local time=2026-01-21T13:03:33.424Z level=INFO msg="config refresh tokens rotation" enabled=true time=2026-01-21T13:03:33.438Z level=INFO msg="keys expired, rotating" time=2026-01-21T13:03:35.424Z level=INFO msg="keys rotated" next_rotation=2026-01-21T19:03:35.420Z time=2026-01-21T13:03:35.424Z level=INFO msg="listening on" server=telemetry address=0.0.0.0:5558 time=2026-01-21T13:03:35.424Z level=INFO msg="listening on" server=https address=0.0.0.0:9443 ---------- namespace 'enterprise-contract-service' ---------- ---------- namespace 'integration-service' ---------- apiVersion: v1 kind: Pod metadata: annotations: kubectl.kubernetes.io/default-container: manager creationTimestamp: "2026-01-21T13:04:46Z" generateName: integration-service-controller-manager-589744499c- labels: control-plane: controller-manager pod-template-hash: 589744499c name: integration-service-controller-manager-589744499c-q4brb namespace: integration-service ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: integration-service-controller-manager-589744499c uid: b125cb36-2778-4e02-9ba7-dd255712c392 resourceVersion: "6290" uid: ccf15ba3-5573-47a6-9ef6-528fc1b30e97 spec: containers: - args: - --metrics-bind-address=:8080 - --leader-elect - --lease-duration=30s - --leader-renew-deadline=15s - --leader-elector-retry-period=5s command: - /manager image: quay.io/redhat-user-workloads/rhtap-integration-tenant/integration-service/integration-service:on-pr-f9ecbbcf927ff3641a98e1e84dc2be2a8206a597-linux-x86-64 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager ports: - containerPort: 9443 name: webhook-server protocol: TCP - containerPort: 8081 name: probes protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-zd994 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true serviceAccount: integration-service-controller-manager serviceAccountName: integration-service-controller-manager 
terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: cert secret: defaultMode: 420 secretName: webhook-server-cert - name: kube-api-access-zd994 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:59Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:46Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:46Z" message: 'containers with unready status: [manager]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:46Z" message: 'containers with unready status: [manager]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:46Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://2dbb370a38a3d84ece90d689400b1b96451ade95e5b483b5486cff8121231555 image: quay.io/redhat-user-workloads/rhtap-integration-tenant/integration-service/integration-service:on-pr-f9ecbbcf927ff3641a98e1e84dc2be2a8206a597-linux-x86-64 imageID: quay.io/redhat-user-workloads/rhtap-integration-tenant/integration-service/integration-service@sha256:b5fc210ef3a288a90706a2a619502b0e5edebfc116b02b657c286aa1d6838849 lastState: terminated: containerID: containerd://2dbb370a38a3d84ece90d689400b1b96451ade95e5b483b5486cff8121231555 exitCode: 1 finishedAt: "2026-01-21T13:08:09Z" reason: Error startedAt: "2026-01-21T13:08:09Z" name: manager ready: false restartCount: 5 started: false state: waiting: message: back-off 2m40s restarting failed container=manager pod=integration-service-controller-manager-589744499c-q4brb_integration-service(ccf15ba3-5573-47a6-9ef6-528fc1b30e97) reason: CrashLoopBackOff volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-zd994 readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.41 podIPs: - ip: 10.244.0.41 qosClass: Burstable startTime: "2026-01-21T13:04:46Z" --- Pod 'integration-service-controller-manager-589744499c-q4brb' under namespace 'integration-service': Pod integration-service-controller-manager-589744499c-q4brb MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found (FailedMount) 2026/01/21 13:08:09 [COVERAGE] Starting coverage server on :9095 2026/01/21 13:08:09 [COVERAGE] Endpoints: GET :9095/coverage, GET :9095/health warning: GOCOVERDIR not set, no coverage data emitted {"level":"error","ts":"2026-01-21T13:08:09Z","logger":"setup","caller":"cmd/main.go:174","msg":"unable to setup controllers","error":"controller with name snapshot already exists. 
Controller names must be unique to avoid multiple controllers reporting to the same metric","stacktrace":"main.main\n\t/opt/app-root/src/cmd/main.go:174\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:283"} apiVersion: v1 kind: Pod metadata: annotations: kubectl.kubernetes.io/default-container: manager creationTimestamp: "2026-01-21T13:04:46Z" generateName: integration-service-controller-manager-589744499c- labels: control-plane: controller-manager pod-template-hash: 589744499c name: integration-service-controller-manager-589744499c-q4brb namespace: integration-service ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: integration-service-controller-manager-589744499c uid: b125cb36-2778-4e02-9ba7-dd255712c392 resourceVersion: "6290" uid: ccf15ba3-5573-47a6-9ef6-528fc1b30e97 spec: containers: - args: - --metrics-bind-address=:8080 - --leader-elect - --lease-duration=30s - --leader-renew-deadline=15s - --leader-elector-retry-period=5s command: - /manager image: quay.io/redhat-user-workloads/rhtap-integration-tenant/integration-service/integration-service:on-pr-f9ecbbcf927ff3641a98e1e84dc2be2a8206a597-linux-x86-64 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager ports: - containerPort: 9443 name: webhook-server protocol: TCP - containerPort: 8081 name: probes protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-zd994 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true serviceAccount: integration-service-controller-manager serviceAccountName: integration-service-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: cert secret: defaultMode: 420 secretName: webhook-server-cert - name: kube-api-access-zd994 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:59Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:46Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:46Z" message: 'containers with unready status: [manager]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:46Z" message: 'containers with unready status: 
[manager]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:46Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://2dbb370a38a3d84ece90d689400b1b96451ade95e5b483b5486cff8121231555 image: quay.io/redhat-user-workloads/rhtap-integration-tenant/integration-service/integration-service:on-pr-f9ecbbcf927ff3641a98e1e84dc2be2a8206a597-linux-x86-64 imageID: quay.io/redhat-user-workloads/rhtap-integration-tenant/integration-service/integration-service@sha256:b5fc210ef3a288a90706a2a619502b0e5edebfc116b02b657c286aa1d6838849 lastState: terminated: containerID: containerd://2dbb370a38a3d84ece90d689400b1b96451ade95e5b483b5486cff8121231555 exitCode: 1 finishedAt: "2026-01-21T13:08:09Z" reason: Error startedAt: "2026-01-21T13:08:09Z" name: manager ready: false restartCount: 5 started: false state: waiting: message: back-off 2m40s restarting failed container=manager pod=integration-service-controller-manager-589744499c-q4brb_integration-service(ccf15ba3-5573-47a6-9ef6-528fc1b30e97) reason: CrashLoopBackOff volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-zd994 readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.41 podIPs: - ip: 10.244.0.41 qosClass: Burstable startTime: "2026-01-21T13:04:46Z" --- Pod 'integration-service-controller-manager-589744499c-q4brb' under namespace 'integration-service': Pod integration-service-controller-manager-589744499c-q4brb Back-off restarting failed container manager in pod integration-service-controller-manager-589744499c-q4brb_integration-service(ccf15ba3-5573-47a6-9ef6-528fc1b30e97) (BackOff) 2026/01/21 13:08:09 [COVERAGE] Starting coverage server on :9095 2026/01/21 13:08:09 [COVERAGE] Endpoints: GET :9095/coverage, GET :9095/health warning: GOCOVERDIR not set, no coverage data emitted {"level":"error","ts":"2026-01-21T13:08:09Z","logger":"setup","caller":"cmd/main.go:174","msg":"unable to setup controllers","error":"controller with name snapshot already exists. 
Controller names must be unique to avoid multiple controllers reporting to the same metric","stacktrace":"main.main\n\t/opt/app-root/src/cmd/main.go:174\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:283"} ---------- namespace 'kind-registry' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T13:03:15Z" generateName: registry-68dcdc78fb- labels: pod-template-hash: 68dcdc78fb run: registry name: registry-68dcdc78fb-kcnnb namespace: kind-registry ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: registry-68dcdc78fb uid: 290e32df-6b90-456f-8ec1-6bc1a1803a7a resourceVersion: "3215" uid: d16a3535-c7f5-4223-8dd2-c959e9df1464 spec: containers: - env: - name: REGISTRY_HTTP_TLS_CERTIFICATE value: /certs/tls.crt - name: REGISTRY_HTTP_TLS_KEY value: /certs/tls.key image: registry:2 imagePullPolicy: IfNotPresent name: registry ports: - containerPort: 5000 protocol: TCP resources: limits: cpu: 100m memory: 250Mi requests: cpu: 10m memory: 50Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /certs name: certs - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-swm8b readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: certs secret: defaultMode: 420 secretName: local-registry-tls - name: kube-api-access-swm8b projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:35Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:15Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:35Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:35Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:15Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://aba9e5b0493e5bee099eb50c1484c3e14bdad03bf01f59ca5a714de00b132033 image: docker.io/library/registry:2 imageID: docker.io/library/registry@sha256:a3d8aaa63ed8681a604f1dea0aa03f100d5895b6a58ace528858a7b332415373 lastState: {} name: registry ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:03:35Z" volumeMounts: - mountPath: /certs name: certs - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-swm8b readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.35 podIPs: - ip: 10.244.0.35 qosClass: Burstable startTime: "2026-01-21T13:03:15Z" --- Pod 'registry-68dcdc78fb-kcnnb' under namespace 'kind-registry': Pod registry-68dcdc78fb-kcnnb MountVolume.SetUp failed for volume "certs" : secret "local-registry-tls" not found (FailedMount) 
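The FailedMount warnings collected so far (secret "dex-cert", "webhook-server-cert" and "local-registry-tls" not found) share one pattern: each pod mounts a Secret that is presumably only issued by cert-manager (deployed earlier in this run) after the pod has already been scheduled, so the kubelet retries the mount until the Secret shows up. A rough triage helper along these lines, a sketch with hypothetical names that is not part of the install scripts, just asks whether the referenced Secret exists now and whether the pod subsequently became Ready:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical usage: mount-triage <namespace> <pod> <secret>
	if len(os.Args) != 4 {
		fmt.Fprintln(os.Stderr, "usage: mount-triage <namespace> <pod> <secret>")
		os.Exit(2)
	}
	ns, pod, secret := os.Args[1], os.Args[2], os.Args[3]

	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ctx := context.Background()

	// A FailedMount on a missing secret only matters if the secret is still missing.
	if _, err := cs.CoreV1().Secrets(ns).Get(ctx, secret, metav1.GetOptions{}); err != nil {
		fmt.Printf("secret %s/%s still missing: %v\n", ns, secret, err)
		os.Exit(1)
	}

	// If the pod reached Ready after the secret appeared, the event was transient.
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			fmt.Printf("secret %s exists and pod %s/%s is Ready: FailedMount was transient\n", secret, ns, pod)
			return
		}
	}
	fmt.Printf("secret %s exists but pod %s/%s is not Ready: keep digging\n", secret, ns, pod)
	os.Exit(1)
}

By that check the dex and registry pods (and, further below, the konflux-ui proxy) recovered and report Ready=True, while the integration-service manager stays NotReady for an unrelated reason shown in its own logs.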
time="2026-01-21T13:03:35Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT" time="2026-01-21T13:03:35Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT_443_TCP" time="2026-01-21T13:03:35Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT_443_TCP_ADDR" time="2026-01-21T13:03:35Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT_443_TCP_PORT" time="2026-01-21T13:03:35Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT_443_TCP_PROTO" time="2026-01-21T13:03:35Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_SERVICE_HOST" time="2026-01-21T13:03:35Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_SERVICE_PORT" time="2026-01-21T13:03:35.43561411Z" level=info msg="Starting upload purge in 54m0s" go.version=go1.20.8 instance.id=81d6182f-f973-4c5a-8950-600120db4c23 service=registry version=2.8.3 time="2026-01-21T13:03:35.435642501Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.20.8 instance.id=81d6182f-f973-4c5a-8950-600120db4c23 service=registry version=2.8.3 time="2026-01-21T13:03:35.435660461Z" level=info msg="redis not configured" go.version=go1.20.8 instance.id=81d6182f-f973-4c5a-8950-600120db4c23 service=registry version=2.8.3 time="2026-01-21T13:03:35.435748546Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.20.8 instance.id=81d6182f-f973-4c5a-8950-600120db4c23 service=registry version=2.8.3 time="2026-01-21T13:03:35.435947822Z" level=info msg="restricting TLS version to tls1.2 or higher" go.version=go1.20.8 instance.id=81d6182f-f973-4c5a-8950-600120db4c23 service=registry version=2.8.3 time="2026-01-21T13:03:35.435976273Z" level=info msg="restricting TLS cipher suites to: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_AES_128_GCM_SHA256,TLS_CHACHA20_POLY1305_SHA256,TLS_AES_256_GCM_SHA384" go.version=go1.20.8 instance.id=81d6182f-f973-4c5a-8950-600120db4c23 service=registry version=2.8.3 time="2026-01-21T13:03:35.436320048Z" level=info msg="listening on [::]:5000, tls" go.version=go1.20.8 instance.id=81d6182f-f973-4c5a-8950-600120db4c23 service=registry version=2.8.3 ---------- namespace 'konflux-info' ---------- ---------- namespace 'konflux-ui' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T13:04:52Z" generateName: proxy-6f756d5475- labels: app: proxy pod-template-hash: 6f756d5475 name: proxy-6f756d5475-4drbx namespace: konflux-ui ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: proxy-6f756d5475 uid: 795ab419-eeb4-4b0c-b705-bde39cf866e4 resourceVersion: "5153" uid: 9d322f9b-d404-44ec-921b-3792fb260bb1 spec: containers: - command: - nginx - -g - daemon off; - -c - /etc/nginx/nginx.conf image: registry.access.redhat.com/ubi9/nginx-124@sha256:b924363ff07ee0f8fd4f680497da774ac0721722a119665998ff5b2111098ad1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /health port: 
9443 scheme: HTTPS initialDelaySeconds: 30 periodSeconds: 60 successThreshold: 1 timeoutSeconds: 1 name: nginx ports: - containerPort: 8080 name: web protocol: TCP - containerPort: 9443 name: web-tls protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /health port: 9443 scheme: HTTPS initialDelaySeconds: 30 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 300m memory: 256Mi requests: cpu: 30m memory: 128Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/nginx/nginx.conf name: proxy readOnly: true subPath: nginx.conf - mountPath: /var/log/nginx name: logs - mountPath: /var/lib/nginx/tmp name: nginx-tmp - mountPath: /run name: run - mountPath: /mnt name: serving-cert - mountPath: /mnt/nginx-generated-config name: nginx-generated-config - mountPath: /mnt/nginx-additional-location-configs name: nginx-static - mountPath: /opt/app-root/src/static-content name: static-content - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pb8hp readOnly: true - args: - --provider - oidc - --provider-display-name - Dex OIDC - --client-id - oauth2-proxy - --http-address - 127.0.0.1:6000 - --redirect-url - https://54.71.21.136:9443/oauth2/callback - --oidc-issuer-url - https://54.71.21.136:9443/idp/ - --skip-oidc-discovery - --login-url - https://54.71.21.136:9443/idp/auth - --redeem-url - https://dex.dex.svc.cluster.local:9443/idp/token - --oidc-jwks-url - https://dex.dex.svc.cluster.local:9443/idp/keys - --cookie-secure - "true" - --cookie-name - __Host-konflux-ci-cookie - --email-domain - '*' - --ssl-insecure-skip-verify - "true" - --set-xauthrequest - "true" - --whitelist-domain - 54.71.21.136:9443 - --skip-jwt-bearer-tokens env: - name: OAUTH2_PROXY_CLIENT_SECRET valueFrom: secretKeyRef: key: client-secret name: oauth2-proxy-client-secret - name: OAUTH2_PROXY_COOKIE_SECRET valueFrom: secretKeyRef: key: cookie-secret name: oauth2-proxy-cookie-secret image: quay.io/oauth2-proxy/oauth2-proxy:latest@sha256:121cdc6520a02d7a2ddd181af6dbdc0f11f7d0c0d9353a999a69c3998cbfe37e imagePullPolicy: Always name: oauth2-proxy ports: - containerPort: 6000 name: web protocol: TCP resources: limits: cpu: 300m memory: 256Mi requests: cpu: 30m memory: 128Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pb8hp readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true initContainers: - command: - cp - -R - /opt/app-root/src/. 
- /mnt/static-content/ image: quay.io/konflux-ci/konflux-ui@sha256:008e0b1f6db14b6223b3191a114b733b82aee6577c62857f54286dff8df97448 imagePullPolicy: IfNotPresent name: copy-static-content resources: limits: cpu: 50m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /mnt/static-content name: static-content - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pb8hp readOnly: true - command: - sh - -c - | set -e # Copy the auth.conf template and replace the bearer token token=$(cat /mnt/api-token/token) sed "s/__BEARER_TOKEN__/$token/" /mnt/nginx-templates/auth.conf > /mnt/nginx-generated-config/auth.conf chmod 640 /mnt/nginx-generated-config/auth.conf image: registry.access.redhat.com/ubi9/ubi@sha256:66233eebd72bb5baa25190d4f55e1dc3fff3a9b77186c1f91a0abdb274452072 imagePullPolicy: IfNotPresent name: generate-nginx-configs resources: limits: cpu: 50m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /mnt/nginx-generated-config name: nginx-generated-config - mountPath: /mnt/nginx-templates name: nginx-templates - mountPath: /mnt/api-token name: api-token - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pb8hp readOnly: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: proxy serviceAccountName: proxy terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 topologySpreadConstraints: - labelSelector: matchLabels: app: proxy maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: ScheduleAnyway volumes: - configMap: defaultMode: 420 items: - key: nginx.conf path: nginx.conf name: proxy-6bg85b98b7 name: proxy - configMap: defaultMode: 420 name: proxy-nginx-templates-4m8fgtf4m9 name: nginx-templates - name: nginx-static projected: defaultMode: 420 sources: - configMap: name: proxy-nginx-static-fmmfg7d22f - configMap: name: nginx-idp-location-h959ghd6bh - emptyDir: {} name: logs - emptyDir: {} name: nginx-tmp - emptyDir: {} name: run - name: serving-cert secret: defaultMode: 420 secretName: serving-cert - emptyDir: {} name: nginx-generated-config - name: api-token secret: defaultMode: 420 secretName: proxy - emptyDir: {} name: static-content - name: kube-api-access-pb8hp projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:05:08Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:05:15Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:05:56Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:05:56Z" status: "True" type: ContainersReady - lastProbeTime: null 
lastTransitionTime: "2026-01-21T13:04:52Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://04dd7524cba6228725bde71c1f667fee96da796f2701c33c26fc26711f99dccc image: sha256:d7f74cda4bbee0fb68c5fc8ca2946a8bd2c5fe0e19b8258a4d4890f1468f4a69 imageID: registry.access.redhat.com/ubi9/nginx-124@sha256:b924363ff07ee0f8fd4f680497da774ac0721722a119665998ff5b2111098ad1 lastState: {} name: nginx ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:05:22Z" volumeMounts: - mountPath: /etc/nginx/nginx.conf name: proxy readOnly: true recursiveReadOnly: Disabled - mountPath: /var/log/nginx name: logs - mountPath: /var/lib/nginx/tmp name: nginx-tmp - mountPath: /run name: run - mountPath: /mnt name: serving-cert - mountPath: /mnt/nginx-generated-config name: nginx-generated-config - mountPath: /mnt/nginx-additional-location-configs name: nginx-static - mountPath: /opt/app-root/src/static-content name: static-content - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pb8hp readOnly: true recursiveReadOnly: Disabled - containerID: containerd://29c50f06591782733f1a41f6c7903cce55740fb0205f6a27984c4ea1c023f476 image: sha256:789b5da5d7e02ec17af03a65a3e76d24a0b845a2bebf0958d73069a8156519f8 imageID: quay.io/oauth2-proxy/oauth2-proxy@sha256:121cdc6520a02d7a2ddd181af6dbdc0f11f7d0c0d9353a999a69c3998cbfe37e lastState: {} name: oauth2-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:05:24Z" volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pb8hp readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 initContainerStatuses: - containerID: containerd://1230d9dd2f79fee9ec80199b012ffef628116cb2d92530834b5bcb3c07cae123 image: sha256:0efa8b105fe9c26684c45b8cc3b813b22a3b415493ce6d41262b1f478da0e33c imageID: quay.io/konflux-ci/konflux-ui@sha256:008e0b1f6db14b6223b3191a114b733b82aee6577c62857f54286dff8df97448 lastState: {} name: copy-static-content ready: true restartCount: 0 started: false state: terminated: containerID: containerd://1230d9dd2f79fee9ec80199b012ffef628116cb2d92530834b5bcb3c07cae123 exitCode: 0 finishedAt: "2026-01-21T13:05:07Z" reason: Completed startedAt: "2026-01-21T13:05:07Z" volumeMounts: - mountPath: /mnt/static-content name: static-content - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pb8hp readOnly: true recursiveReadOnly: Disabled - containerID: containerd://12c5360345424b35d912f7b4c5b4ec1ed176ab2d320562e8cc661146c6911a75 image: sha256:8d2a8803cfca17a81eb9412e1f33ae1c6fe3797553e9b819899dc03f1657cf12 imageID: registry.access.redhat.com/ubi9/ubi@sha256:66233eebd72bb5baa25190d4f55e1dc3fff3a9b77186c1f91a0abdb274452072 lastState: {} name: generate-nginx-configs ready: true restartCount: 0 started: false state: terminated: containerID: containerd://12c5360345424b35d912f7b4c5b4ec1ed176ab2d320562e8cc661146c6911a75 exitCode: 0 finishedAt: "2026-01-21T13:05:13Z" reason: Completed startedAt: "2026-01-21T13:05:13Z" volumeMounts: - mountPath: /mnt/nginx-generated-config name: nginx-generated-config - mountPath: /mnt/nginx-templates name: nginx-templates - mountPath: /mnt/api-token name: api-token - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pb8hp readOnly: true recursiveReadOnly: Disabled phase: Running podIP: 10.244.0.43 podIPs: - ip: 10.244.0.43 qosClass: Burstable startTime: "2026-01-21T13:04:52Z" --- Pod 'proxy-6f756d5475-4drbx' under 
namespace 'konflux-ui': Pod proxy-6f756d5475-4drbx MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found (FailedMount) [21/Jan/2026:13:05:54 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000754.548 request_time 0.000 [21/Jan/2026:13:05:56 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000756.197 request_time 0.000 [21/Jan/2026:13:06:26 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000786.198 request_time 0.000 [21/Jan/2026:13:06:54 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000814.547 request_time 0.000 [21/Jan/2026:13:06:56 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000816.198 request_time 0.000 [21/Jan/2026:13:07:26 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000846.197 request_time 0.000 [21/Jan/2026:13:07:54 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000874.548 request_time 0.000 [21/Jan/2026:13:07:56 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000876.198 request_time 0.000 [21/Jan/2026:13:08:26 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000906.197 request_time 0.000 [21/Jan/2026:13:08:54 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000934.547 request_time 0.000 [21/Jan/2026:13:08:56 +0000] 10.244.0.1 - - - _ 10.244.0.43 to: - -: GET /health HTTP/1.1 200 upstream_response_time - msec 1769000936.197 request_time 0.000 [2026/01/21 13:05:24] [oauthproxy.go:162] Skipping JWT tokens from configured OIDC issuer: "https://54.71.21.136:9443/idp/" [2026/01/21 13:05:24] [oauthproxy.go:176] OAuthProxy configured for OpenID Connect Client ID: oauth2-proxy [2026/01/21 13:05:24] [oauthproxy.go:182] Cookie settings: name:__Host-konflux-ci-cookie secure(https):true httponly:true expiry:168h0m0s domains: path:/ samesite: refresh:disabled ---------- namespace 'kube-node-lease' ---------- ---------- namespace 'kube-public' ---------- ---------- namespace 'kube-system' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T12:59:31Z" generateName: coredns-668d6bf9bc- labels: k8s-app: kube-dns pod-template-hash: 668d6bf9bc name: coredns-668d6bf9bc-8vqfv namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: coredns-668d6bf9bc uid: f5a3a281-0437-4188-a468-69341dd68c99 resourceVersion: "437" uid: d99790e8-53c1-4a7f-bc87-8e4d84448688 spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: - kube-dns topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - -conf - /etc/coredns/Corefile image: registry.k8s.io/coredns/coredns:v1.11.3 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: coredns ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: 
metrics protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /ready port: 8181 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - ALL readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-tqqwj readOnly: true dnsPolicy: Default enableServiceLinks: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: coredns serviceAccountName: coredns terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - effect: NoSchedule key: node-role.kubernetes.io/control-plane - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 items: - key: Corefile path: Corefile name: coredns name: config-volume - name: kube-api-access-tqqwj projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:43Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:43Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://82aa10c09120b95711beed4bda80ee473050cbbcfaca8f5d66e1150389df8dfd image: registry.k8s.io/coredns/coredns:v1.11.3 imageID: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 lastState: {} name: coredns ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T12:59:46Z" volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-tqqwj readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.4 podIPs: - ip: 10.244.0.4 qosClass: Burstable startTime: "2026-01-21T12:59:43Z" --- Pod 'coredns-668d6bf9bc-8vqfv' under namespace 'kube-system': Pod coredns-668d6bf9bc-8vqfv 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. 
(FailedScheduling) .:53 [INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b CoreDNS-1.11.3 linux/amd64, go1.21.11, a6338e9 apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T12:59:31Z" generateName: coredns-668d6bf9bc- labels: k8s-app: kube-dns pod-template-hash: 668d6bf9bc name: coredns-668d6bf9bc-hplqj namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: coredns-668d6bf9bc uid: f5a3a281-0437-4188-a468-69341dd68c99 resourceVersion: "442" uid: 2d0ad395-7d6f-4aed-8cc3-1e9075f6cbea spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: - kube-dns topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - -conf - /etc/coredns/Corefile image: registry.k8s.io/coredns/coredns:v1.11.3 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: coredns ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /ready port: 8181 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - ALL readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-r8nh2 readOnly: true dnsPolicy: Default enableServiceLinks: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: coredns serviceAccountName: coredns terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - effect: NoSchedule key: node-role.kubernetes.io/control-plane - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 items: - key: Corefile path: Corefile name: coredns name: config-volume - name: kube-api-access-r8nh2 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:43Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:43Z" 
status: "True" type: PodScheduled containerStatuses: - containerID: containerd://db9d167238d156e5da104be061af3179ed5942a199259cce70ca1869b3c8bc3a image: registry.k8s.io/coredns/coredns:v1.11.3 imageID: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 lastState: {} name: coredns ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T12:59:46Z" volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-r8nh2 readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.2 podIPs: - ip: 10.244.0.2 qosClass: Burstable startTime: "2026-01-21T12:59:43Z" --- Pod 'coredns-668d6bf9bc-hplqj' under namespace 'kube-system': Pod coredns-668d6bf9bc-hplqj 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. (FailedScheduling) .:53 [INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b CoreDNS-1.11.3 linux/amd64, go1.21.11, a6338e9 apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T12:59:30Z" generateName: kindnet- labels: app: kindnet controller-revision-hash: b4cc94945 k8s-app: kindnet pod-template-generation: "1" tier: node name: kindnet-tq4gl namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: kindnet uid: 87b7ae67-23e4-42b7-949e-6658d19c84a8 resourceVersion: "385" uid: 8e415767-2c62-4e35-8669-f57f1ec8f8ac spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - kind-mapt-control-plane containers: - env: - name: HOST_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.hostIP - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: POD_SUBNET value: 10.244.0.0/16 - name: CONTROL_PLANE_ENDPOINT value: kind-mapt-control-plane:6443 image: docker.io/kindest/kindnetd:v20250512-df8de77b imagePullPolicy: IfNotPresent name: kindnet-cni resources: limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi securityContext: capabilities: add: - NET_RAW - NET_ADMIN privileged: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/cni/net.d name: cni-cfg - mountPath: /run/xtables.lock name: xtables-lock - mountPath: /lib/modules name: lib-modules readOnly: true - mountPath: /var/run/nri name: nri-plugin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-96r6v readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: kindnet serviceAccountName: kindnet terminationGracePeriodSeconds: 30 tolerations: - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists 
- effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - hostPath: path: /etc/cni/net.d type: "" name: cni-cfg - hostPath: path: /run/xtables.lock type: FileOrCreate name: xtables-lock - hostPath: path: /lib/modules type: "" name: lib-modules - hostPath: path: /var/run/nri type: "" name: nri-plugin - name: kube-api-access-96r6v projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:33Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:31Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:33Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:33Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:31Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://6eb26bb5e7ccb179e4a67e4b3f7ed4b1fa3d020f4706e431d3ef4bb1f5485a98 image: docker.io/kindest/kindnetd:v20250512-df8de77b imageID: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c lastState: {} name: kindnet-cni ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T12:59:33Z" volumeMounts: - mountPath: /etc/cni/net.d name: cni-cfg - mountPath: /run/xtables.lock name: xtables-lock - mountPath: /lib/modules name: lib-modules readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/nri name: nri-plugin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-96r6v readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.89.0.2 podIPs: - ip: 10.89.0.2 qosClass: Guaranteed startTime: "2026-01-21T12:59:31Z" --- Pod 'kindnet-tq4gl' under namespace 'kube-system': Pod kindnet-tq4gl MountVolume.SetUp failed for volume "kube-api-access-96r6v" : configmap "kube-root-ca.crt" not found (FailedMount) I0121 12:59:33.429273 1 main.go:390] probe TCP address kind-mapt-control-plane:6443 I0121 12:59:33.430168 1 main.go:109] connected to apiserver: https://kind-mapt-control-plane:6443 I0121 12:59:33.430277 1 main.go:139] hostIP = 10.89.0.2 podIP = 10.89.0.2 I0121 12:59:33.430366 1 main.go:148] setting mtu 9001 for CNI I0121 12:59:33.430376 1 main.go:178] kindnetd IP family: "ipv4" I0121 12:59:33.430385 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16] time="2026-01-21T12:59:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)" I0121 12:59:33.713693 1 controller.go:377] "Starting controller" name="kube-network-policies" I0121 12:59:33.713748 1 controller.go:381] "Waiting for informer caches to sync" I0121 12:59:33.727850 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies" time="2026-01-21T12:59:33Z" level=info msg="Registering plugin 10-kube-network-policies..." time="2026-01-21T12:59:33Z" level=info msg="Configuring plugin 10-kube-network-policies for runtime containerd/v2.1.1..." 
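Annotation (not part of the captured output): the FailedMount warning above for kindnet-tq4gl, and the identical one reported later for kube-proxy-qxrlb, point at the kube-root-ca.crt ConfigMap not existing yet while the node was still bootstrapping; both containers started shortly afterwards, so the failure was transient. A minimal follow-up check, assuming kubectl access to the same kind cluster (illustrative commands, not emitted by the script):
  # Confirm the root-CA ConfigMap backing the projected service-account volume now exists
  kubectl -n kube-system get configmap kube-root-ca.crt
  # Show any FailedMount events still recorded in the namespace
  kubectl -n kube-system get events --field-selector reason=FailedMount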
time="2026-01-21T12:59:33Z" level=info msg="Started plugin 10-kube-network-policies..." I0121 12:59:33.912030 1 nri.go:56] Synchronized state with the runtime (6 pods, 6 containers)... I0121 12:59:33.927979 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies" I0121 12:59:33.927991 1 metrics.go:72] Registering metrics I0121 12:59:33.928026 1 controller.go:711] "Syncing nftables rules" E0121 12:59:33.928163 1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory" I0121 12:59:43.720409 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 12:59:43.720462 1 main.go:301] handling current node I0121 12:59:53.719356 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 12:59:53.719379 1 main.go:301] handling current node I0121 13:00:03.718210 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:00:03.718240 1 main.go:301] handling current node I0121 13:00:13.713825 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:00:13.713851 1 main.go:301] handling current node I0121 13:00:23.716384 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:00:23.716418 1 main.go:301] handling current node I0121 13:00:33.713703 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:00:33.713724 1 main.go:301] handling current node I0121 13:00:43.713920 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:00:43.713944 1 main.go:301] handling current node I0121 13:00:53.713066 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:00:53.713102 1 main.go:301] handling current node I0121 13:01:03.713207 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:01:03.713231 1 main.go:301] handling current node I0121 13:01:13.715219 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:01:13.715242 1 main.go:301] handling current node I0121 13:01:23.716737 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:01:23.716760 1 main.go:301] handling current node I0121 13:01:33.713176 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:01:33.713197 1 main.go:301] handling current node I0121 13:01:43.713097 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:01:43.713123 1 main.go:301] handling current node I0121 13:01:53.713067 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:01:53.713095 1 main.go:301] handling current node I0121 13:02:03.713884 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:02:03.713904 1 main.go:301] handling current node I0121 13:02:13.721891 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:02:13.721914 1 main.go:301] handling current node I0121 13:02:23.713461 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:02:23.713482 1 main.go:301] handling current node I0121 13:02:33.713869 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:02:33.713893 1 main.go:301] handling current node I0121 13:02:43.717570 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:02:43.717596 1 main.go:301] handling current node I0121 13:02:53.713275 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:02:53.713299 1 main.go:301] handling current node I0121 13:03:03.714679 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:03:03.714708 1 main.go:301] handling current node I0121 13:03:13.713278 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:03:13.713316 
1 main.go:301] handling current node I0121 13:03:23.713937 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:03:23.713966 1 main.go:301] handling current node I0121 13:03:33.713463 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:03:33.713491 1 main.go:301] handling current node I0121 13:03:43.713459 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:03:43.713491 1 main.go:301] handling current node I0121 13:03:53.714262 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:03:53.714285 1 main.go:301] handling current node I0121 13:04:03.713651 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:04:03.713695 1 main.go:301] handling current node I0121 13:04:13.714141 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:04:13.714165 1 main.go:301] handling current node I0121 13:04:23.713514 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:04:23.713536 1 main.go:301] handling current node I0121 13:04:33.714051 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:04:33.714085 1 main.go:301] handling current node I0121 13:04:43.713860 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:04:43.713885 1 main.go:301] handling current node I0121 13:04:49.997482 1 controller.go:711] "Syncing nftables rules" I0121 13:04:50.189207 1 controller.go:711] "Syncing nftables rules" I0121 13:04:53.713538 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:04:53.713573 1 main.go:301] handling current node I0121 13:05:02.152034 1 controller.go:711] "Syncing nftables rules" I0121 13:05:03.713494 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:05:03.713522 1 main.go:301] handling current node I0121 13:05:13.713086 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:05:13.713139 1 main.go:301] handling current node I0121 13:05:23.713429 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:05:23.713456 1 main.go:301] handling current node I0121 13:05:33.717436 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:05:33.717459 1 main.go:301] handling current node I0121 13:05:43.720054 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:05:43.720077 1 main.go:301] handling current node I0121 13:05:53.721867 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:05:53.721889 1 main.go:301] handling current node I0121 13:06:03.713972 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:06:03.713996 1 main.go:301] handling current node I0121 13:06:13.721274 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:06:13.721296 1 main.go:301] handling current node I0121 13:06:23.718379 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:06:23.718412 1 main.go:301] handling current node I0121 13:06:33.716357 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:06:33.716380 1 main.go:301] handling current node I0121 13:06:43.720039 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:06:43.720064 1 main.go:301] handling current node I0121 13:06:53.717098 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:06:53.717123 1 main.go:301] handling current node I0121 13:07:03.722175 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:07:03.722206 1 main.go:301] handling current node I0121 13:07:13.713464 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:07:13.713534 1 main.go:301] handling current node 
I0121 13:07:23.713749 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:07:23.713781 1 main.go:301] handling current node I0121 13:07:33.720013 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:07:33.720036 1 main.go:301] handling current node I0121 13:07:43.713558 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:07:43.713591 1 main.go:301] handling current node I0121 13:07:53.719648 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:07:53.719674 1 main.go:301] handling current node I0121 13:08:03.722681 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:08:03.722712 1 main.go:301] handling current node I0121 13:08:13.715529 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:08:13.715553 1 main.go:301] handling current node I0121 13:08:23.716289 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:08:23.716315 1 main.go:301] handling current node I0121 13:08:33.718099 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:08:33.718121 1 main.go:301] handling current node I0121 13:08:43.713560 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:08:43.713593 1 main.go:301] handling current node I0121 13:08:53.718844 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:08:53.718866 1 main.go:301] handling current node I0121 13:09:03.718416 1 main.go:297] Handling node with IPs: map[10.89.0.2:{}] I0121 13:09:03.718441 1 main.go:301] handling current node apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T12:59:30Z" generateName: kube-proxy- labels: controller-revision-hash: 5987677dc7 k8s-app: kube-proxy pod-template-generation: "1" name: kube-proxy-qxrlb namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: kube-proxy uid: d969e672-ea64-42fb-87e9-f284a9f86455 resourceVersion: "388" uid: 4eab2454-38a7-45b2-8689-a787c49a0162 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - kind-mapt-control-plane containers: - command: - /usr/local/bin/kube-proxy - --config=/var/lib/kube-proxy/config.conf - --hostname-override=$(NODE_NAME) env: - name: NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName image: registry.k8s.io/kube-proxy:v1.32.5 imagePullPolicy: IfNotPresent name: kube-proxy resources: {} securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /run/xtables.lock name: xtables-lock - mountPath: /lib/modules name: lib-modules readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4tggz readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: kube-proxy serviceAccountName: kube-proxy terminationGracePeriodSeconds: 30 tolerations: - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: 
node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: kube-proxy name: kube-proxy - hostPath: path: /run/xtables.lock type: FileOrCreate name: xtables-lock - hostPath: path: /lib/modules type: "" name: lib-modules - name: kube-api-access-4tggz projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:33Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:31Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:33Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:33Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:31Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://9316e354babff6e2c9bd5721ef83b3c820a3f5a48e2275fd2e54e32bd6c24c3f image: registry.k8s.io/kube-proxy-amd64:v1.32.5 imageID: sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363 lastState: {} name: kube-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T12:59:32Z" volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /run/xtables.lock name: xtables-lock - mountPath: /lib/modules name: lib-modules readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4tggz readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.89.0.2 podIPs: - ip: 10.89.0.2 qosClass: BestEffort startTime: "2026-01-21T12:59:31Z" --- Pod 'kube-proxy-qxrlb' under namespace 'kube-system': Pod kube-proxy-qxrlb MountVolume.SetUp failed for volume "kube-api-access-4tggz" : configmap "kube-root-ca.crt" not found (FailedMount) I0121 12:59:32.608274 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["10.89.0.2"] I0121 12:59:32.608402 1 conntrack.go:121] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_established" value=86400 I0121 12:59:32.608446 1 conntrack.go:121] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_close_wait" value=3600 E0121 12:59:32.608468 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. 
Consider using `--nodeport-addresses primary`" I0121 12:59:32.627406 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4" I0121 12:59:32.627429 1 server_linux.go:170] "Using iptables Proxier" I0121 12:59:32.628790 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4" I0121 12:59:32.637398 1 server.go:497] "Version info" version="v1.32.5" I0121 12:59:32.637414 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0121 12:59:32.638341 1 config.go:105] "Starting endpoint slice config controller" I0121 12:59:32.638358 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config I0121 12:59:32.638358 1 config.go:329] "Starting node config controller" I0121 12:59:32.638370 1 shared_informer.go:313] Waiting for caches to sync for node config I0121 12:59:32.638381 1 config.go:199] "Starting service config controller" I0121 12:59:32.638395 1 shared_informer.go:313] Waiting for caches to sync for service config I0121 12:59:32.739071 1 shared_informer.go:320] Caches are synced for node config I0121 12:59:32.739078 1 shared_informer.go:320] Caches are synced for endpoint slice config I0121 12:59:32.739097 1 shared_informer.go:320] Caches are synced for service config ---------- namespace 'kyverno' ---------- ---------- namespace 'local-path-storage' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T12:59:31Z" generateName: local-path-provisioner-7dc846544d- labels: app: local-path-provisioner pod-template-hash: 7dc846544d name: local-path-provisioner-7dc846544d-l7k8k namespace: local-path-storage ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: local-path-provisioner-7dc846544d uid: 2e553e67-10df-4aae-92d0-5f1eb3f7f1d3 resourceVersion: "434" uid: 4f87b374-3cba-4a44-a011-e71b244d535a spec: containers: - command: - local-path-provisioner - --debug - start - --helper-image - docker.io/kindest/local-path-helper:v20241212-8ac705d0 - --config - /etc/config/config.json env: - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: CONFIG_MOUNT_PATH value: /etc/config/ image: docker.io/kindest/local-path-provisioner:v20250214-acbabc1a imagePullPolicy: IfNotPresent name: local-path-provisioner resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/config/ name: config-volume - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-nrsrk readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: local-path-provisioner-service-account serviceAccountName: local-path-provisioner-service-account terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/control-plane operator: Equal - effect: NoSchedule key: node-role.kubernetes.io/master operator: Equal - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 name: 
local-path-config name: config-volume - name: kube-api-access-nrsrk projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:43Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:46Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T12:59:43Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://b7b8ff3105e875514c75e6b5895d0fb464f43ee205189e91a6e475ec19312929 image: docker.io/kindest/local-path-provisioner:v20250214-acbabc1a imageID: sha256:bbb6209cc873b9b4095bd014b4687512eea2bd7b246f9ec06f4f6f0be14d9fb6 lastState: {} name: local-path-provisioner ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T12:59:46Z" volumeMounts: - mountPath: /etc/config/ name: config-volume - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-nrsrk readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.3 podIPs: - ip: 10.244.0.3 qosClass: BestEffort startTime: "2026-01-21T12:59:43Z" --- Pod 'local-path-provisioner-7dc846544d-l7k8k' under namespace 'local-path-storage': Pod local-path-provisioner-7dc846544d-l7k8k 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. 
(FailedScheduling) time="2026-01-21T12:59:46Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/var/local-path-provisioner\"]}],\"storageClassConfigs\":null}" time="2026-01-21T12:59:46Z" level=debug msg="Provisioner started" I0121 12:59:46.207355 1 controller.go:824] "Starting provisioner controller" component="rancher.io/local-path_local-path-provisioner-7dc846544d-l7k8k_3b17d798-3dcf-4e87-b56e-cdc59c8e506b" I0121 12:59:46.307485 1 controller.go:873] "Started provisioner controller" component="rancher.io/local-path_local-path-provisioner-7dc846544d-l7k8k_3b17d798-3dcf-4e87-b56e-cdc59c8e506b" time="2026-01-21T13:00:10Z" level=debug msg="config doesn't contain node kind-mapt-control-plane, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" time="2026-01-21T13:00:10Z" level=info msg="Creating volume pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 at kind-mapt-control-plane:/var/local-path-provisioner/pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6_test-pvc-ns_test-pvc" time="2026-01-21T13:00:10Z" level=info msg="create the helper pod helper-pod-create-pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 into local-path-storage" I0121 13:00:10.655298 1 event.go:389] "Event occurred" object="test-pvc-ns/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="Provisioning" message="External provisioner is provisioning volume for claim \"test-pvc-ns/test-pvc\"" time="2026-01-21T13:00:12Z" level=info msg="Volume pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 has been created on kind-mapt-control-plane:/var/local-path-provisioner/pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6_test-pvc-ns_test-pvc" time="2026-01-21T13:00:12Z" level=info msg="Start of helper-pod-create-pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 logs" time="2026-01-21T13:00:12Z" level=info msg="End of helper-pod-create-pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 logs" I0121 13:00:12.681792 1 event.go:389] "Event occurred" object="test-pvc-ns/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6" time="2026-01-21T13:00:12Z" level=info msg="Deleting volume pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 at kind-mapt-control-plane:/var/local-path-provisioner/pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6_test-pvc-ns_test-pvc" time="2026-01-21T13:00:12Z" level=info msg="create the helper pod helper-pod-delete-pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 into local-path-storage" time="2026-01-21T13:00:16Z" level=info msg="Volume pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 has been deleted on kind-mapt-control-plane:/var/local-path-provisioner/pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6_test-pvc-ns_test-pvc" time="2026-01-21T13:00:16Z" level=info msg="Start of helper-pod-delete-pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 logs" time="2026-01-21T13:00:16Z" level=info msg="End of helper-pod-delete-pvc-b4b8cbb5-117f-4bd7-a36c-79e79b0dc5f6 logs" time="2026-01-21T13:02:23Z" level=debug msg="config doesn't contain node kind-mapt-control-plane, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" time="2026-01-21T13:02:23Z" level=info msg="Creating volume pvc-af9cfe45-0b85-418f-bafb-7949dbd45605 at kind-mapt-control-plane:/var/local-path-provisioner/pvc-af9cfe45-0b85-418f-bafb-7949dbd45605_tekton-pipelines_postgredb-tekton-results-postgres-0" time="2026-01-21T13:02:23Z" level=info msg="create the helper pod helper-pod-create-pvc-af9cfe45-0b85-418f-bafb-7949dbd45605 into local-path-storage" 
I0121 13:02:23.285771 1 event.go:389] "Event occurred" object="tekton-pipelines/postgredb-tekton-results-postgres-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="Provisioning" message="External provisioner is provisioning volume for claim \"tekton-pipelines/postgredb-tekton-results-postgres-0\"" time="2026-01-21T13:02:26Z" level=info msg="Volume pvc-af9cfe45-0b85-418f-bafb-7949dbd45605 has been created on kind-mapt-control-plane:/var/local-path-provisioner/pvc-af9cfe45-0b85-418f-bafb-7949dbd45605_tekton-pipelines_postgredb-tekton-results-postgres-0" time="2026-01-21T13:02:26Z" level=info msg="Start of helper-pod-create-pvc-af9cfe45-0b85-418f-bafb-7949dbd45605 logs" time="2026-01-21T13:02:26Z" level=info msg="End of helper-pod-create-pvc-af9cfe45-0b85-418f-bafb-7949dbd45605 logs" I0121 13:02:27.003384 1 event.go:389] "Event occurred" object="tekton-pipelines/postgredb-tekton-results-postgres-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-af9cfe45-0b85-418f-bafb-7949dbd45605" ---------- namespace 'namespace-lister' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T13:04:49Z" generateName: namespace-lister-584d4574c4- labels: apps: namespace-lister pod-template-hash: 584d4574c4 name: namespace-lister-584d4574c4-rzwjh namespace: namespace-lister ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: namespace-lister-584d4574c4 uid: 7f968bee-080c-4637-986c-57d4bf325858 resourceVersion: "4713" uid: d7252533-6bd1-460b-9bc6-526dd02c1f3f spec: containers: - args: - -enable-tls - -cert-path=/var/tls/tls.crt - -key-path=/var/tls/tls.key env: - name: LOG_LEVEL value: "0" - name: CACHE_RESYNC_PERIOD value: 10m - name: CACHE_NAMESPACE_LABELSELECTOR value: konflux-ci.dev/type=tenant - name: AUTH_USERNAME_HEADER value: Impersonate-User image: quay.io/konflux-ci/namespace-lister@sha256:e4bb09dfe4513cdbe349507495c7f4044623a3c2aff866b7946d941e82a7a639 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTPS initialDelaySeconds: 1 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: namespace-lister ports: - containerPort: 8080 name: http protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8080 scheme: HTTPS initialDelaySeconds: 1 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 200m memory: 256Mi requests: cpu: 20m memory: 64Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsNonRoot: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/tls name: tls readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qz5lf readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: namespace-lister serviceAccountName: namespace-lister terminationGracePeriodSeconds: 60 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 topologySpreadConstraints: - labelSelector: matchLabels: apps: namespace-lister maxSkew: 
1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: ScheduleAnyway volumes: - name: tls secret: defaultMode: 420 secretName: namespace-lister-tls - name: kube-api-access-qz5lf projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:05:02Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:49Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:05:02Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:05:02Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:49Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://b2fbfd7a58b6180f7a830eae70d1929b75a2290bb93016d7970dbe2e2d8e8e99 image: sha256:66d01e5e112c798152bbf5d3d32848223af9958374089097f3f2821eaf5ca91b imageID: quay.io/konflux-ci/namespace-lister@sha256:e4bb09dfe4513cdbe349507495c7f4044623a3c2aff866b7946d941e82a7a639 lastState: {} name: namespace-lister ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:05:01Z" volumeMounts: - mountPath: /var/tls name: tls readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qz5lf readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.42 podIPs: - ip: 10.244.0.42 qosClass: Burstable startTime: "2026-01-21T13:04:49Z" --- Pod 'namespace-lister-584d4574c4-rzwjh' under namespace 'namespace-lister': Pod namespace-lister-584d4574c4-rzwjh MountVolume.SetUp failed for volume "tls" : secret "namespace-lister-tls" not found (FailedMount) {"time":"2026-01-21T13:05:01.766374437Z","level":"INFO","msg":"creating resource cache"} {"time":"2026-01-21T13:05:01.963461265Z","level":"INFO","msg":"creating access cache"} {"time":"2026-01-21T13:05:01.963664823Z","level":"INFO","msg":"building metrics server"} {"time":"2026-01-21T13:05:01.965278139Z","level":"INFO","msg":"starting metrics server in background"} {"time":"2026-01-21T13:05:01.965306102Z","level":"INFO","msg":"building api server"} {"time":"2026-01-21T13:05:01.965396983Z","level":"INFO","msg":"Starting metrics server","logger":"controller-runtime/metrics"} {"time":"2026-01-21T13:05:01.965647977Z","level":"INFO","msg":"Serving metrics server","logger":"controller-runtime/metrics","bindAddress":":9100","secure":true} ---------- namespace 'openshift-pipelines' ---------- ---------- namespace 'pipelines-as-code' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T13:03:08Z" generateName: pipelines-as-code-controller-668fdd7d95- labels: app: pipelines-as-code-controller app.kubernetes.io/component: controller app.kubernetes.io/instance: default app.kubernetes.io/name: controller app.kubernetes.io/part-of: pipelines-as-code app.kubernetes.io/version: v0.41.0 pod-template-hash: 668fdd7d95 name: pipelines-as-code-controller-668fdd7d95-5mvkr namespace: pipelines-as-code ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: pipelines-as-code-controller-668fdd7d95 uid: bb5f92f3-eddc-45b3-bead-47fcc86310ab resourceVersion: "2892" uid: 
2b575417-e523-4ee6-9f2e-4231df964a8e spec: containers: - env: - name: CONFIG_LOGGING_NAME value: pac-config-logging - name: TLS_KEY value: key - name: TLS_CERT value: cert - name: TLS_SECRET_NAME value: pipelines-as-code-tls-secret - name: SYSTEM_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: K_METRICS_CONFIG value: '{"Domain":"pipelinesascode.tekton.dev/controller","Component":"pac_controller","PrometheusPort":9090,"ConfigMap":{"name":"pipelines-as-code-config-observability"}}' - name: K_TRACING_CONFIG value: '{"backend":"prometheus","debug":"false","sample-rate":"0"}' - name: K_SINK_TIMEOUT value: "30" - name: PAC_CONTROLLER_LABEL value: default - name: PAC_CONTROLLER_SECRET value: pipelines-as-code-secret - name: PAC_CONTROLLER_CONFIGMAP value: pipelines-as-code - name: KUBERNETES_MIN_VERSION value: v1.28.0 image: ghcr.io/openshift-pipelines/pipelines-as-code/pipelines-as-code-controller:v0.41.0 imagePullPolicy: Always livenessProbe: failureThreshold: 3 httpGet: path: /live port: api scheme: HTTP periodSeconds: 15 successThreshold: 1 timeoutSeconds: 1 name: pac-controller ports: - containerPort: 8082 name: api protocol: TCP - containerPort: 9090 name: metrics protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /live port: api scheme: HTTP periodSeconds: 15 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 100m memory: 100Mi requests: cpu: 50m memory: 50Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/pipelines-as-code/tls name: tls readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-v6vkg readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault serviceAccount: pipelines-as-code-controller serviceAccountName: pipelines-as-code-controller terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: tls secret: defaultMode: 420 optional: true secretName: pipelines-as-code-tls-secret - name: kube-api-access-v6vkg projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:12Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:08Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:13Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:13Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:08Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://a9cf0dd524216d782ba5be7ea2e488f24e6f32d47918bc72fc6b0ba3de35a3c6 image: ghcr.io/openshift-pipelines/pipelines-as-code/pipelines-as-code-controller:v0.41.0 imageID: 
ghcr.io/openshift-pipelines/pipelines-as-code/pipelines-as-code-controller@sha256:6e3e194f278af27019b014545317f2175ff52520cdd559c30c12b7bdf9c89640 lastState: {} name: pac-controller ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:03:12Z" volumeMounts: - mountPath: /etc/pipelines-as-code/tls name: tls readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-v6vkg readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.30 podIPs: - ip: 10.244.0.30 qosClass: Burstable startTime: "2026-01-21T13:03:08Z" --- Pod 'pipelines-as-code-controller-668fdd7d95-5mvkr' under namespace 'pipelines-as-code': Pod pipelines-as-code-controller-668fdd7d95-5mvkr Readiness probe failed: Get "http://10.244.0.30:8082/live": dial tcp 10.244.0.30:8082: connect: connection refused (Unhealthy) {"level":"info","ts":"2026-01-21T13:03:12.872Z","logger":"pipelinesascode","caller":"v2/configurator_configmap.go:104","msg":"Adding Watcher on ConfigMap pac-config-logging for logs","commit":"c909416"} {"level":"info","ts":"2026-01-21T13:03:12.872Z","logger":"pipelinesascode","caller":"v2/main.go:225","msg":"ConfigMap watcher is enabled","commit":"c909416"} {"level":"info","ts":"2026-01-21T13:03:13.073Z","logger":"pipelinesascode","caller":"adapter/adapter.go:86","msg":"Starting Pipelines as Code version: v0.41.0","commit":"c909416"} {"level":"info","ts":"2026-01-21T13:03:13.073Z","logger":"pipelinesascode","caller":"injection/health_check.go:43","msg":"Probes server listening on port 8080","commit":"c909416"} {"level":"info","ts":1769000593.074525,"caller":"configutil/config.go:50","msg":"updating value for field ApplicationName: from 'Pipelines as Code CI' to 'Local Konflux'"} {"level":"info","ts":1769000593.0745811,"caller":"configutil/config.go:50","msg":"updating value for field CustomConsoleName: from '' to 'Local Konflux'"} {"level":"info","ts":1769000593.0745995,"caller":"configutil/config.go:50","msg":"updating value for field CustomConsoleURL: from '' to 'https://54.71.21.136:9443'"} {"level":"info","ts":1769000593.0746043,"caller":"configutil/config.go:50","msg":"updating value for field CustomConsolePRdetail: from '' to 'https://54.71.21.136:9443/ns/{{ namespace }}/pipelinerun/{{ pr }}'"} {"level":"info","ts":1769000593.0746107,"caller":"configutil/config.go:50","msg":"updating value for field CustomConsolePRTaskLog: from '' to 'https://54.71.21.136:9443/ns/{{ namespace }}/pipelinerun/{{ pr }}/logs/{{ task }}'"} {"level":"info","ts":1769000593.074618,"caller":"configutil/config.go:50","msg":"updating value for field CustomConsoleNamespaceURL: from '' to 'https://54.71.21.136:9443/ns/{{ namespace }}'"} {"level":"info","ts":1769000593.0746307,"caller":"params/run.go:45","msg":"updating console url to: https://54.71.21.136:9443"} ---------- namespace 'release-service' ---------- apiVersion: v1 kind: Pod metadata: annotations: kubectl.kubernetes.io/default-container: manager creationTimestamp: "2026-01-21T13:04:23Z" generateName: release-service-controller-manager-98c694ddb- labels: control-plane: controller-manager pod-template-hash: 98c694ddb name: release-service-controller-manager-98c694ddb-v49hj namespace: release-service ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: release-service-controller-manager-98c694ddb uid: 03411c8e-0874-4685-8bf5-b45d134c9042 resourceVersion: "4291" uid: 
d7ce8c7d-d1cc-41e7-88c5-6e232e338a17 spec: containers: - args: - --metrics-bind-address=:8080 - --leader-elect=false command: - /manager env: - name: DEFAULT_RELEASE_PVC valueFrom: configMapKeyRef: key: DEFAULT_RELEASE_PVC name: release-service-manager-properties optional: true - name: DEFAULT_RELEASE_WORKSPACE_NAME valueFrom: configMapKeyRef: key: DEFAULT_RELEASE_WORKSPACE_NAME name: release-service-manager-properties optional: true - name: DEFAULT_RELEASE_WORKSPACE_SIZE valueFrom: configMapKeyRef: key: DEFAULT_RELEASE_WORKSPACE_SIZE name: release-service-manager-properties optional: true - name: SERVICE_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace image: quay.io/konflux-ci/release-service:8b9231d38b15c8c1612bc6ec7ab60e393a77f2ea imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager ports: - containerPort: 9443 name: webhook-server protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-5zd4s readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true serviceAccount: release-service-controller-manager serviceAccountName: release-service-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: cert secret: defaultMode: 420 secretName: webhook-server-cert - name: kube-api-access-5zd4s projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:29Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:23Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:40Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:40Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:04:23Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://364a1c5c67ac301e08f9ccfa9f9876c354ef5d331e1042fdf7bcf49481a5f66f image: quay.io/konflux-ci/release-service:8b9231d38b15c8c1612bc6ec7ab60e393a77f2ea imageID: quay.io/konflux-ci/release-service@sha256:4a97672b239562b20fb74c8bd482d54affc7e09019c0fab97a334c1bc245fbef lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:04:28Z" volumeMounts: - mountPath: 
/tmp/k8s-webhook-server/serving-certs name: cert readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-5zd4s readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.39 podIPs: - ip: 10.244.0.39 qosClass: Burstable startTime: "2026-01-21T13:04:23Z" --- Pod 'release-service-controller-manager-98c694ddb-v49hj' under namespace 'release-service': Pod release-service-controller-manager-98c694ddb-v49hj MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found (FailedMount) 2026-01-21T13:04:28.419Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-appstudio-redhat-com-v1alpha1-author"} 2026-01-21T13:04:28.419Z INFO controller-runtime.builder Registering a mutating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=Release", "path": "/mutate-appstudio-redhat-com-v1alpha1-release"} 2026-01-21T13:04:28.419Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-appstudio-redhat-com-v1alpha1-release"} 2026-01-21T13:04:28.419Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=Release", "path": "/validate-appstudio-redhat-com-v1alpha1-release"} 2026-01-21T13:04:28.419Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-appstudio-redhat-com-v1alpha1-release"} 2026-01-21T13:04:28.419Z INFO controller-runtime.builder Registering a mutating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=ReleasePlan", "path": "/mutate-appstudio-redhat-com-v1alpha1-releaseplan"} 2026-01-21T13:04:28.419Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-appstudio-redhat-com-v1alpha1-releaseplan"} 2026-01-21T13:04:28.419Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=ReleasePlan", "path": "/validate-appstudio-redhat-com-v1alpha1-releaseplan"} 2026-01-21T13:04:28.419Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-appstudio-redhat-com-v1alpha1-releaseplan"} 2026-01-21T13:04:28.419Z INFO controller-runtime.builder Registering a mutating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=ReleasePlanAdmission", "path": "/mutate-appstudio-redhat-com-v1alpha1-releaseplanadmission"} 2026-01-21T13:04:28.419Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-appstudio-redhat-com-v1alpha1-releaseplanadmission"} 2026-01-21T13:04:28.419Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=ReleasePlanAdmission", "path": "/validate-appstudio-redhat-com-v1alpha1-releaseplanadmission"} 2026-01-21T13:04:28.419Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-appstudio-redhat-com-v1alpha1-releaseplanadmission"} 2026-01-21T13:04:28.419Z INFO setup starting manager 2026-01-21T13:04:28.420Z INFO controller-runtime.metrics Starting metrics server 2026-01-21T13:04:28.420Z INFO starting server {"name": "health probe", "addr": "[::]:8081"} 2026-01-21T13:04:28.420Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":8080", "secure": false} 2026-01-21T13:04:28.420Z INFO controller-runtime.webhook Starting webhook server 2026-01-21T13:04:28.420Z INFO setup disabling http/2 2026-01-21T13:04:28.420Z INFO controller-runtime.certwatcher Updated current TLS certificate {"cert": "/tmp/k8s-webhook-server/serving-certs/tls.crt", "key": 
"/tmp/k8s-webhook-server/serving-certs/tls.key"} 2026-01-21T13:04:28.420Z INFO controller-runtime.webhook Serving webhook server {"host": "", "port": 9443} 2026-01-21T13:04:28.420Z INFO controller-runtime.certwatcher Starting certificate poll+watcher {"cert": "/tmp/k8s-webhook-server/serving-certs/tls.crt", "key": "/tmp/k8s-webhook-server/serving-certs/tls.key", "interval": "10s"} 2026-01-21T13:04:29.521Z INFO Starting EventSource {"controller": "release", "controllerGroup": "appstudio.redhat.com", "controllerKind": "Release", "source": "kind source: *v1.PipelineRun"} 2026-01-21T13:04:29.521Z INFO Starting EventSource {"controller": "releaseplan", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlan", "source": "kind source: *v1alpha1.ReleasePlanAdmission"} 2026-01-21T13:04:29.521Z INFO Starting EventSource {"controller": "releaseplanadmission", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlanAdmission", "source": "kind source: *v1alpha1.ReleasePlan"} 2026-01-21T13:04:29.521Z INFO Starting EventSource {"controller": "release", "controllerGroup": "appstudio.redhat.com", "controllerKind": "Release", "source": "kind source: *v1alpha1.Release"} 2026-01-21T13:04:29.521Z INFO Starting EventSource {"controller": "releaseplan", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlan", "source": "kind source: *v1alpha1.ReleasePlan"} 2026-01-21T13:04:29.521Z INFO Starting EventSource {"controller": "releaseplanadmission", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlanAdmission", "source": "kind source: *v1alpha1.ReleasePlanAdmission"} 2026-01-21T13:04:29.622Z INFO Starting Controller {"controller": "releaseplan", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlan"} 2026-01-21T13:04:29.622Z INFO Starting workers {"controller": "releaseplan", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlan", "worker count": 1} 2026-01-21T13:04:29.622Z INFO Starting Controller {"controller": "release", "controllerGroup": "appstudio.redhat.com", "controllerKind": "Release"} 2026-01-21T13:04:29.622Z INFO Starting Controller {"controller": "releaseplanadmission", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlanAdmission"} 2026-01-21T13:04:29.622Z INFO Starting workers {"controller": "releaseplanadmission", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlanAdmission", "worker count": 1} 2026-01-21T13:04:29.622Z INFO Starting workers {"controller": "release", "controllerGroup": "appstudio.redhat.com", "controllerKind": "Release", "worker count": 1} ---------- namespace 'smee-client' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2026-01-21T13:03:36Z" generateName: gosmee-client-9d74466cf- labels: app: gosmee-client pod-template-hash: 9d74466cf name: gosmee-client-9d74466cf-rz6b5 namespace: smee-client ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: gosmee-client-9d74466cf uid: 45a38e04-548c-448b-ba90-09f82640c1ac resourceVersion: "3505" uid: a813e05a-ed88-4b4d-8b8c-af0dd9401fdc spec: containers: - args: - client - https://smee.io/YARnqp67zyD8kvgySVJeMEqX7fLEKpSoaHVxXC2 - http://localhost:8080 image: ghcr.io/chmouel/gosmee:v0.28.3 imagePullPolicy: Always livenessProbe: exec: command: - /shared/check-smee-health.sh failureThreshold: 2 initialDelaySeconds: 20 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 10 name: gosmee resources: limits: cpu: 100m memory: 32Mi 
requests: cpu: 10m memory: 32Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /shared name: shared-health - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-c8cv2 readOnly: true - env: - name: DOWNSTREAM_SERVICE_URL value: http://54.71.21.136:8180 - name: SMEE_CHANNEL_URL value: https://smee.io/YARnqp67zyD8kvgySVJeMEqX7fLEKpSoaHVxXC2 - name: INSECURE_SKIP_VERIFY value: "true" - name: HEALTH_CHECK_TIMEOUT_SECONDS value: "20" image: quay.io/konflux-ci/smee-sidecar:latest@sha256:9d81addccbe9ae1a89be12e9d4c5d5e7c7767dddcb3a7f24f1931080d3bd8629 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - /shared/check-sidecar-health.sh failureThreshold: 2 initialDelaySeconds: 10 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 10 name: health-check-sidecar ports: - containerPort: 8080 name: http protocol: TCP - containerPort: 9100 name: metrics protocol: TCP resources: limits: cpu: 100m memory: 32Mi requests: cpu: 10m memory: 32Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /shared name: shared-health - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-c8cv2 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65532 serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - emptyDir: {} name: shared-health - name: kube-api-access-c8cv2 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:43Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:36Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:43Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:43Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:03:36Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://f804ee4024592d0fcd55eb27c489bf0c5050717d9718b60b6db4c6a6f7b330dd image: ghcr.io/chmouel/gosmee:v0.28.3 imageID: ghcr.io/chmouel/gosmee@sha256:3924c4fd119281d8f0e4605e3c4354a4653d752df502ff14dc5de233b944c5e9 lastState: {} name: gosmee ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:03:40Z" volumeMounts: - mountPath: /shared name: shared-health - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-c8cv2 readOnly: true recursiveReadOnly: Disabled - containerID: containerd://b68c8abbd0795193ec0d44d8ad4b3e8e3596376661f600f45e0e38c777d3f27c image: sha256:87353128374ba7600c8b24db2f5cb0b5da597107ff54a2da74db9665c63f31da imageID: 
quay.io/konflux-ci/smee-sidecar@sha256:9d81addccbe9ae1a89be12e9d4c5d5e7c7767dddcb3a7f24f1931080d3bd8629 lastState: {} name: health-check-sidecar ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:03:43Z" volumeMounts: - mountPath: /shared name: shared-health - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-c8cv2 readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.36 podIPs: - ip: 10.244.0.36 qosClass: Burstable startTime: "2026-01-21T13:03:36Z"
--- Pod 'gosmee-client-9d74466cf-rz6b5' under namespace 'smee-client':
Pod gosmee-client-9d74466cf-rz6b5 Liveness probe failed: Health file missing: /shared/health-status.txt
Wed, 21 Jan 2026 13:03:40 UTC INF Starting gosmee client version: dev
Wed, 21 Jan 2026 13:03:40 UTC WRN Could not parse server version: invalid character '<' looking for beginning of value
Wed, 21 Jan 2026 13:03:40 UTC INF Configured reconnection strategy to retry indefinitely
Wed, 21 Jan 2026 13:03:40 UTC INF 2026-01-21T13.03.01.931 Forwarding https://smee.io/YARnqp67zyD8kvgySVJeMEqX7fLEKpSoaHVxXC2 to http://localhost:8080
Wed, 21 Jan 2026 13:04:13 UTC INF 2026-01-21T13.04.01.618 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:04:43 UTC INF 2026-01-21T13.04.01.615 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:05:13 UTC INF 2026-01-21T13.05.01.598 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:05:43 UTC INF 2026-01-21T13.05.01.619 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:06:13 UTC INF 2026-01-21T13.06.01.635 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:06:43 UTC INF 2026-01-21T13.06.01.620 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:07:13 UTC INF 2026-01-21T13.07.01.612 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:07:43 UTC INF 2026-01-21T13.07.01.621 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:08:13 UTC INF 2026-01-21T13.08.01.596 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:08:43 UTC INF 2026-01-21T13.08.01.676 request replayed to http://localhost:8080, status: 200
Wed, 21 Jan 2026 13:09:13 UTC INF 2026-01-21T13.09.01.636 request replayed to http://localhost:8080, status: 200
2026/01/21 13:03:43 Starting Smee instrumentation sidecar...
2026/01/21 13:03:43 Wrote read-only probe script: /shared/check-smee-health.sh
2026/01/21 13:03:43 Wrote read-only probe script: /shared/check-sidecar-health.sh
2026/01/21 13:03:43 Wrote read-only probe script: /shared/check-file-age.sh
2026/01/21 13:03:43 pprof endpoints disabled (set ENABLE_PPROF=true to enable)
2026/01/21 13:03:43 Management server (metrics) listening on :9100
2026/01/21 13:03:43 Relay server listening on :8080 with timeouts (read: 180s, write: 60s, idle: 600s)
2026/01/21 13:03:43 Starting background health checker (interval: 30s, timeout: 20s)
2026/01/21 13:04:13 Health check completed: success (Health check completed successfully)
2026/01/21 13:04:43 Health check completed: success (Health check completed successfully)
2026/01/21 13:05:13 Health check completed: success (Health check completed successfully)
2026/01/21 13:05:43 Health check completed: success (Health check completed successfully)
2026/01/21 13:06:13 Health check completed: success (Health check completed successfully)
2026/01/21 13:06:43 Health check completed: success (Health check completed successfully)
2026/01/21 13:07:13 Health check completed: success (Health check completed successfully)
2026/01/21 13:07:43 Health check completed: success (Health check completed successfully)
2026/01/21 13:08:13 Health check completed: success (Health check completed successfully)
2026/01/21 13:08:43 Health check completed: success (Health check completed successfully)
2026/01/21 13:09:13 Health check completed: success (Health check completed successfully)
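The single liveness-probe failure recorded above ("Health file missing: /shared/health-status.txt") looks like a startup race rather than a real fault: the probe appears to have fired before the sidecar's background checker wrote the health file for the first time, and every health check from 13:04:13 onward succeeded. If the event were to repeat, a reasonable first look might be the following (illustrative kubectl commands, not part of this run's output; they assume access to the same cluster):

  kubectl -n smee-client describe pod gosmee-client-9d74466cf-rz6b5                    # recent probe events
  kubectl -n smee-client logs deploy/gosmee-client -c health-check-sidecar --tail=20   # is the checker still writing?
  kubectl -n smee-client logs deploy/gosmee-client -c gosmee --tail=20                 # is the client still forwarding?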
---------- namespace 'tekton-operator' ----------
---------- namespace 'tekton-pipelines' ----------
apiVersion: v1 kind: Pod metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false" creationTimestamp: "2026-01-21T13:02:22Z" generateName: tekton-results-api-546b75cb88- labels: app: tekton-results-api app.kubernetes.io/name: tekton-results-api app.kubernetes.io/version: v0.17.1 operator.tekton.dev/deployment-spec-applied-hash: 94fec08c437e0ae3635dfa4fb552d015 pod-template-hash: 546b75cb88 name: tekton-results-api-546b75cb88-w94r5 namespace: tekton-pipelines ownerReferences: -
apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: tekton-results-api-546b75cb88 uid: c7747fc1-03ca-451b-98aa-8a270c63ee95 resourceVersion: "2358" uid: 64dafaae-3d1e-4aba-a975-f4f4ce42a60b spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/os operator: NotIn values: - windows podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/name: tekton-results-api app.kubernetes.io/version: v0.17.1 topologyKey: kubernetes.io/hostname weight: 100 containers: - env: - name: DB_HOST value: tekton-results-postgres-service.tekton-pipelines.svc.cluster.local - name: DB_PASSWORD valueFrom: secretKeyRef: key: POSTGRES_PASSWORD name: tekton-results-postgres - name: DB_USER valueFrom: secretKeyRef: key: POSTGRES_USER name: tekton-results-postgres - name: IS_EXTERNAL_DB value: "false" - name: KUBERNETES_MIN_VERSION value: v1.0.0 - name: LOGGING_PLUGIN_TLS_VERIFICATION_DISABLE value: "false" - name: ROUTE_ENABLED value: "false" - name: ROUTE_TLS_TERMINATION value: edge image: ghcr.io/tektoncd/results/api-b1b7ffa9ba32f7c3020c3b68830b30a8:v0.17.1@sha256:74d09ec29a0382c0ddd2e116375eef984297304190f1516ae35ccbc748fac5b2 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTPS initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: api readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTPS initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: {} securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsNonRoot: true seccompProfile: type: RuntimeDefault startupProbe: failureThreshold: 10 httpGet: path: /healthz port: 8080 scheme: HTTPS initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/tekton/results name: config readOnly: true - mountPath: /etc/tls name: tls readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xn2dk readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault serviceAccount: tekton-results-api serviceAccountName: tekton-results-api terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 name: tekton-results-api-config name: config - name: tls secret: defaultMode: 420 secretName: tekton-results-tls - name: kube-api-access-xn2dk projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-01-21T13:02:27Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-01-21T13:02:22Z" status: "True" type: 
Initialized - lastProbeTime: null lastTransitionTime: "2026-01-21T13:02:53Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-01-21T13:02:53Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-01-21T13:02:22Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://06de2e5e0fbb88aef75441c2e80ead5521ce5690181941b377f02019f3037580 image: sha256:ea40367e0e2cd3cdac088f43d4f732fb1101b2461e0b23770a56f81f5c88788b imageID: ghcr.io/tektoncd/results/api-b1b7ffa9ba32f7c3020c3b68830b30a8@sha256:74d09ec29a0382c0ddd2e116375eef984297304190f1516ae35ccbc748fac5b2 lastState: {} name: api ready: true restartCount: 0 started: true state: running: startedAt: "2026-01-21T13:02:27Z" volumeMounts: - mountPath: /etc/tekton/results name: config readOnly: true recursiveReadOnly: Disabled - mountPath: /etc/tls name: tls readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xn2dk readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.22 podIPs: - ip: 10.244.0.22 qosClass: BestEffort startTime: "2026-01-21T13:02:22Z"
--- Pod 'tekton-results-api-546b75cb88-w94r5' under namespace 'tekton-pipelines':
Pod tekton-results-api-546b75cb88-w94r5 Startup probe failed: Get "https://10.244.0.22:8080/healthz": dial tcp 10.244.0.22:8080: connect: connection refused (Unhealthy)
2026/01/21 13:02:27 maxprocs: Leaving GOMAXPROCS=48: CPU quota undefined
{"level":"warn","ts":1769000547.4034355,"caller":"api/main.go:149","msg":"Database ping failed (retrying in 10s): failed to connect to `user=result database=tekton-results`: 10.96.93.199:5432 (tekton-results-postgres-service.tekton-pipelines.svc.cluster.local): dial error: dial tcp 10.96.93.199:5432: connect: connection refused"}
{"level":"warn","ts":1769000557.403608,"caller":"api/main.go:149","msg":"Database ping failed (retrying in 10s): failed to connect to `user=result database=tekton-results`: 10.96.93.199:5432 (tekton-results-postgres-service.tekton-pipelines.svc.cluster.local): dial error: dial tcp 10.96.93.199:5432: connect: connection refused"}
{"level":"info","ts":1769000567.4081953,"caller":"api/main.go:208","msg":"Kubernetes RBAC authorization check enabled"}
{"level":"info","ts":1769000567.4085817,"caller":"api/main.go:229","msg":"Kubernetes RBAC impersonation enabled"}
{"level":"warn","ts":1769000567.431657,"caller":"plugin/plugin_logs.go:720","msg":"Plugin Logs API Disable: unsupported type of logs given for plugin, legacy logging system might work"}
{"level":"info","ts":1769000567.4322064,"caller":"api/main.go:318","msg":"Prometheus server listening on: 9090"}
{"level":"info","ts":1769000567.432524,"caller":"api/main.go:369","msg":"gRPC and REST server listening on: 8080"}
apiVersion: v1 items: [] kind: List metadata: resourceVersion: ""
apiVersion: v1 items: [] kind: List metadata: resourceVersion: ""
Generated logs successfully
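In the tekton-pipelines dump above, both failure signals were transient: the startup-probe "connection refused" and the two "Database ping failed" warnings cover only the window before tekton-results-postgres was accepting connections, and the API reports "gRPC and REST server listening on: 8080" about twenty seconds later. Had the retries kept repeating, a plausible follow-up (illustrative commands, not part of this run) would be to look at the postgres pod and the service named in the DB_HOST value shown in the spec:

  kubectl -n tekton-pipelines get pods | grep postgres
  kubectl -n tekton-pipelines get svc tekton-results-postgres-service
  kubectl -n tekton-pipelines logs deploy/tekton-results-api --tail=50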
[INFO] Reading required secrets...
[INFO] Deploying image-controller...
💧 Starting Image Controller deployment...
🌊 Deploying Image Controller components...
namespace/image-controller created
customresourcedefinition.apiextensions.k8s.io/imagerepositories.appstudio.redhat.com created
serviceaccount/image-controller-controller-manager created
role.rbac.authorization.k8s.io/image-controller-leader-election-role created
clusterrole.rbac.authorization.k8s.io/image-controller-imagerepository-editor-role created
clusterrole.rbac.authorization.k8s.io/image-controller-imagerepository-viewer-role created
clusterrole.rbac.authorization.k8s.io/image-controller-manager-role created
clusterrole.rbac.authorization.k8s.io/image-controller-metrics-auth-role created
rolebinding.rbac.authorization.k8s.io/image-controller-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/image-controller-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/image-controller-metrics-auth-rolebinding created
configmap/image-controller-image-pruner-configmap-tc4f9c8t66 created
configmap/image-controller-notification-resetter-configmap-b2kb8hg596 created
service/image-controller-controller-manager-metrics-service created
deployment.apps/image-controller-controller-manager created
cronjob.batch/image-controller-image-pruner-cronjob created
cronjob.batch/image-controller-notification-resetter-cronjob created
🔑 Setting up Quay credentials...
🔑 Creating new Quay secret...
secret/quaytoken created
⏳ Waiting for Image Controller to be ready...
⏳ Waiting for Image Controller pods to be ready...
pod/image-controller-controller-manager-67855b78f4-m6c4v condition met
[INFO] Adding PaC secrets to pipelines-as-code...
secret/pipelines-as-code-secret created
secret/pipelines-as-code-secret created
secret/pipelines-as-code-secret created
[INFO] Deploying smee client...
namespace/smee-client unchanged
deployment.apps/gosmee-client configured
⏳ Waiting for Tekton configuration to be ready...
tektonconfig.operator.tekton.dev/config condition met
⏳ Waiting for all deployments to be available...
deployment.apps/build-service-controller-manager condition met
deployment.apps/cert-manager condition met
deployment.apps/cert-manager-cainjector condition met
deployment.apps/cert-manager-webhook condition met
deployment.apps/trust-manager condition met
deployment.apps/dex condition met
deployment.apps/image-controller-controller-manager condition met
timed out waiting for the condition on deployments/integration-service-controller-manager
timed out waiting for the condition on deployments/registry
timed out waiting for the condition on deployments/proxy
timed out waiting for the condition on deployments/coredns
timed out waiting for the condition on deployments/kyverno-admission-controller
timed out waiting for the condition on deployments/kyverno-background-controller
timed out waiting for the condition on deployments/kyverno-cleanup-controller
timed out waiting for the condition on deployments/kyverno-reports-controller
timed out waiting for the condition on deployments/local-path-provisioner
timed out waiting for the condition on deployments/namespace-lister
timed out waiting for the condition on deployments/pipelines-as-code-controller
timed out waiting for the condition on deployments/pipelines-as-code-watcher
timed out waiting for the condition on deployments/pipelines-as-code-webhook
timed out waiting for the condition on deployments/release-service-controller-manager
timed out waiting for the condition on deployments/gosmee-client
timed out waiting for the condition on deployments/tekton-operator
timed out waiting for the condition on deployments/tekton-operator-webhook
timed out waiting for the condition on deployments/tekton-chains-controller
timed out waiting for the condition on deployments/tekton-events-controller
timed out waiting for the condition on deployments/tekton-operator-proxy-webhook
timed out waiting for the condition on deployments/tekton-pipelines-controller
timed out waiting for the condition on deployments/tekton-pipelines-remote-resolvers
timed out waiting for the condition on deployments/tekton-pipelines-webhook
timed out waiting for the condition on deployments/tekton-results-api
timed out waiting for the condition on deployments/tekton-results-retention-policy-agent
timed out waiting for the condition on deployments/tekton-results-watcher
timed out waiting for the condition on deployments/tekton-triggers-controller
timed out waiting for the condition on deployments/tekton-triggers-core-interceptors
timed out waiting for the condition on deployments/tekton-triggers-webhook
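This wait step is where the run actually goes wrong: integration-service-controller-manager, among others, never reports the awaited condition within the script's timeout, which is what later breaks the webhook call during test-resource creation. Several of the other entries (coredns, local-path-provisioner, the kyverno controllers) may simply be slow to roll out on a single-node kind cluster rather than broken. One way to separate the two, assuming kubectl access to the same cluster (illustrative commands, not part of the log):

  kubectl get deployments -A                                    # the READY column shows which deployments are still 0/1
  kubectl -n integration-service describe deployment integration-service-controller-manager
  kubectl -n integration-service get pods -o wide               # look for ImagePullBackOff / CrashLoopBackOff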
serviceaccount/default configured serviceaccount/release-pipeline created role.rbac.authorization.k8s.io/ns2-pod-viewer-job-creator created rolebinding.rbac.authorization.k8s.io/release-pipeline-resource-role-binding created rolebinding.rbac.authorization.k8s.io/user1-konflux-admin created rolebinding.rbac.authorization.k8s.io/user2-konflux-admin created rolebinding.rbac.authorization.k8s.io/ns2-pod-viewer-job-creator-binding created rolebinding.rbac.authorization.k8s.io/release-pipeline-resource-role-binding created rolebinding.rbac.authorization.k8s.io/user1-konflux-admin created rolebinding.rbac.authorization.k8s.io/user2-konflux-admin created clusterrolebinding.rbac.authorization.k8s.io/managed1-self-access-review created clusterrolebinding.rbac.authorization.k8s.io/managed2-self-access-review created clusterrolebinding.rbac.authorization.k8s.io/user1-self-access-review created secret/regcred-empty created application.appstudio.redhat.com/sample-component created component.appstudio.redhat.com/sample-component created releaseplan.appstudio.redhat.com/local-release created releaseplan.appstudio.redhat.com/sample-component created Error from server (InternalError): error when creating "./test/resources/demo-users/user/": Internal error occurred: failed calling webhook "vintegrationtestscenario.kb.io": failed to call webhook: Post "https://integration-service-webhook-service.integration-service.svc:443/validate-appstudio-redhat-com-v1beta2-integrationtestscenario?timeout=10s": dial tcp 10.96.131.28:443: connect: connection refused
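The closing error almost certainly follows from the timeout above: the validating webhook "vintegrationtestscenario.kb.io" points at integration-service-webhook-service, which presumably has no ready endpoints because integration-service-controller-manager never became available, so the API server's webhook call is refused and the IntegrationTestScenario resources cannot be created. One way to confirm and recover once the controller is up (illustrative commands only; the apply invocation should match however the script applied ./test/resources/demo-users/user/):

  kubectl -n integration-service get deployment integration-service-controller-manager
  kubectl -n integration-service get endpoints integration-service-webhook-service
  kubectl -n integration-service logs deploy/integration-service-controller-manager --tail=100
  # once the webhook service has endpoints, re-apply the failed resources:
  kubectl apply -k ./test/resources/demo-users/user/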