[INFO] Fetching and executing solve-pr-pairing.sh...
[INFO] Looking for paired PR in konflux-ci/release-service for 'release-service-catalog'
[INFO] No paired PR found in konflux-ci/release-service on branch refs/heads/staging
[INFO] Checking for paired PR in redhat-appstudio/infra-deployments
[INFO] No paired PR found in redhat-appstudio/infra-deployments. Falling back to branch: main
[INFO] Downloading release_service_config.yaml...
[INFO] Downloaded release-service-config.yaml
[INFO] Downloading release-pipeline-resources-clusterrole.yaml...
[INFO] Downloaded release-pipeline-resources-clusterrole.yaml
[INFO] Configuration files updated.
[INFO] Loading env vars from parameters
[WARNING] No substitutions will be applied as the kustomization file for release-service-catalog has not been found.
Kubernetes control plane is running at https://3.7.161.19:6443
CoreDNS is running at https://3.7.161.19:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[INFO] Installing Konflux CI dependencies
🔍 Checking requirements
kubectl is installed
openssl is installed
All requirements are met
Continue
🧪 Testing PVC creation for default storage class
Creating PVC from './dependencies/pre-deployment-pvc-binding' using the cluster's default storage class
namespace/test-pvc-ns created
persistentvolumeclaim/test-pvc created
pod/test-pvc-consumer created
persistentvolumeclaim/test-pvc condition met
namespace "test-pvc-ns" deleted
persistentvolumeclaim "test-pvc" deleted
pod "test-pvc-consumer" deleted
PVC binding successful
🌊 Deploying Konflux Dependencies
🔐 Deploying Cert Manager...
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager-webhook created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
pod/cert-manager-66d46f75d6-6rxgl condition met
pod/cert-manager-cainjector-856bdc4b95-sp5mg condition met
pod/cert-manager-webhook-7fdfc5cd79-nznkz condition met
🤝 Deploying Trust Manager...
customresourcedefinition.apiextensions.k8s.io/bundles.trust.cert-manager.io created
serviceaccount/trust-manager created
role.rbac.authorization.k8s.io/trust-manager created
role.rbac.authorization.k8s.io/trust-manager:leaderelection created
clusterrole.rbac.authorization.k8s.io/trust-manager created
rolebinding.rbac.authorization.k8s.io/trust-manager created
rolebinding.rbac.authorization.k8s.io/trust-manager:leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/trust-manager created
service/trust-manager created
service/trust-manager-metrics created
deployment.apps/trust-manager created
certificate.cert-manager.io/trust-manager created
issuer.cert-manager.io/trust-manager created
validatingwebhookconfiguration.admissionregistration.k8s.io/trust-manager created
pod/trust-manager-7c9f8b8f7d-cflrm condition met
📜 Setting up Cluster Issuer...
certificate.cert-manager.io/selfsigned-ca created
clusterissuer.cert-manager.io/ca-issuer created
clusterissuer.cert-manager.io/self-signed-cluster-issuer created
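[NOTE] The Cluster Issuer step above bootstraps a self-signed chain: a self-signed ClusterIssuer signs a CA Certificate, and a second ClusterIssuer then issues from that CA. A minimal sketch of what such a chain typically looks like follows; the resource names match the log, but the namespace, secret name, and field values are assumptions rather than the actual konflux-ci manifests:

    kubectl apply -f - <<'EOF'
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: self-signed-cluster-issuer
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: selfsigned-ca
      namespace: cert-manager          # assumed namespace
    spec:
      isCA: true
      commonName: selfsigned-ca
      secretName: root-ca-secret       # assumed secret name
      issuerRef:
        name: self-signed-cluster-issuer
        kind: ClusterIssuer
    ---
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: ca-issuer
    spec:
      ca:
        secretName: root-ca-secret     # must match the CA Certificate's secretName
    EOF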
🐱 Deploying Tekton...
🐱 Installing Tekton Operator...
namespace/tekton-operator created
customresourcedefinition.apiextensions.k8s.io/manualapprovalgates.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonchains.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonconfigs.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektondashboards.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonhubs.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektoninstallersets.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonpipelines.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektonresults.operator.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tektontriggers.operator.tekton.dev created
serviceaccount/tekton-operator created
role.rbac.authorization.k8s.io/tekton-operator-info created
clusterrole.rbac.authorization.k8s.io/tekton-config-read-role created
clusterrole.rbac.authorization.k8s.io/tekton-operator created
clusterrole.rbac.authorization.k8s.io/tekton-result-read-role created
rolebinding.rbac.authorization.k8s.io/tekton-operator-info created
clusterrolebinding.rbac.authorization.k8s.io/tekton-config-read-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/tekton-operator created
clusterrolebinding.rbac.authorization.k8s.io/tekton-result-read-rolebinding created
configmap/config-logging created
configmap/tekton-config-defaults created
configmap/tekton-config-observability created
configmap/tekton-operator-controller-config-leader-election created
configmap/tekton-operator-info created
configmap/tekton-operator-webhook-config-leader-election created
secret/tekton-operator-webhook-certs created
service/tekton-operator created
service/tekton-operator-webhook created
deployment.apps/tekton-operator created
deployment.apps/tekton-operator-webhook created
pod/tekton-operator-596b885757-zzsxb condition met
pod/tekton-operator-webhook-85dd8445c9-kknzh condition met
tektonconfig.operator.tekton.dev/config condition met
⚙️ Configuring Tekton...
Warning: resource tektonconfigs/config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
tektonconfig.operator.tekton.dev/config configured
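[NOTE] The warning above is informational: client-side `kubectl apply` found an object (here the operator-created TektonConfig) without the kubectl.kubernetes.io/last-applied-configuration annotation, so it patches the annotation in and continues. Two common ways to avoid the warning, shown as illustrative commands rather than what the deploy script actually runs:

    # Server-side apply tracks ownership via managedFields and does not need
    # the last-applied-configuration annotation at all:
    kubectl apply --server-side --force-conflicts -f tekton-config.yaml   # file name is illustrative

    # Or record the configuration when creating resources imperatively, so a
    # later client-side apply has something to diff against:
    kubectl create --save-config -f tekton-config.yaml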
🔄 Setting up Pipeline As Code...
namespace/pipelines-as-code created
customresourcedefinition.apiextensions.k8s.io/repositories.pipelinesascode.tekton.dev created
serviceaccount/pipelines-as-code-controller created
serviceaccount/pipelines-as-code-watcher created
serviceaccount/pipelines-as-code-webhook created
role.rbac.authorization.k8s.io/pipelines-as-code-controller-role created
role.rbac.authorization.k8s.io/pipelines-as-code-info created
role.rbac.authorization.k8s.io/pipelines-as-code-watcher-role created
role.rbac.authorization.k8s.io/pipelines-as-code-webhook-role created
clusterrole.rbac.authorization.k8s.io/pipeline-as-code-controller-clusterrole created
clusterrole.rbac.authorization.k8s.io/pipeline-as-code-watcher-clusterrole created
clusterrole.rbac.authorization.k8s.io/pipeline-as-code-webhook-clusterrole created
clusterrole.rbac.authorization.k8s.io/pipelines-as-code-aggregate created
rolebinding.rbac.authorization.k8s.io/pipelines-as-code-controller-binding created
rolebinding.rbac.authorization.k8s.io/pipelines-as-code-info created
rolebinding.rbac.authorization.k8s.io/pipelines-as-code-watcher-binding created
rolebinding.rbac.authorization.k8s.io/pipelines-as-code-webhook-binding created
clusterrolebinding.rbac.authorization.k8s.io/pipelines-as-code-controller-clusterbinding created
clusterrolebinding.rbac.authorization.k8s.io/pipelines-as-code-watcher-clusterbinding created
clusterrolebinding.rbac.authorization.k8s.io/pipelines-as-code-webhook-clusterbinding created
configmap/pac-config-logging created
configmap/pac-watcher-config-leader-election created
configmap/pac-webhook-config-leader-election created
configmap/pipelines-as-code created
configmap/pipelines-as-code-config-observability created
configmap/pipelines-as-code-info created
secret/pipelines-as-code-webhook-certs created
service/pipelines-as-code-controller created
service/pipelines-as-code-watcher created
service/pipelines-as-code-webhook created
deployment.apps/pipelines-as-code-controller created
deployment.apps/pipelines-as-code-watcher created
deployment.apps/pipelines-as-code-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.pipelinesascode.tekton.dev created
tektonconfig.operator.tekton.dev/config condition met
📊 Setting up Tekton Results...
NAME                      TYPE     DATA   AGE
tekton-results-postgres   Opaque   2      104s
Warning: resource serviceaccounts/tekton-results-api is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/tekton-results-api configured
Warning: resource serviceaccounts/tekton-results-watcher is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/tekton-results-watcher configured
Warning: resource roles/tekton-results-info is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
role.rbac.authorization.k8s.io/tekton-results-info configured Warning: resource clusterroles/tekton-results-admin is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. clusterrole.rbac.authorization.k8s.io/tekton-results-admin configured Warning: resource clusterroles/tekton-results-api is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. clusterrole.rbac.authorization.k8s.io/tekton-results-api configured Warning: resource clusterroles/tekton-results-readonly is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. clusterrole.rbac.authorization.k8s.io/tekton-results-readonly configured Warning: resource clusterroles/tekton-results-readwrite is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. clusterrole.rbac.authorization.k8s.io/tekton-results-readwrite configured Warning: resource clusterroles/tekton-results-watcher is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. clusterrole.rbac.authorization.k8s.io/tekton-results-watcher configured Warning: resource rolebindings/tekton-results-info is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. rolebinding.rbac.authorization.k8s.io/tekton-results-info configured Warning: resource clusterrolebindings/tekton-results-api is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. clusterrolebinding.rbac.authorization.k8s.io/tekton-results-api configured Warning: resource clusterrolebindings/tekton-results-watcher is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. 
clusterrolebinding.rbac.authorization.k8s.io/tekton-results-watcher configured Warning: resource configmaps/tekton-results-api-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. configmap/tekton-results-api-config configured Warning: resource configmaps/tekton-results-config-leader-election is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. configmap/tekton-results-config-leader-election configured Warning: resource configmaps/tekton-results-config-logging is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. configmap/tekton-results-config-logging configured Warning: resource configmaps/tekton-results-config-observability is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. configmap/tekton-results-config-observability configured Warning: resource configmaps/tekton-results-config-results-retention-policy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. configmap/tekton-results-config-results-retention-policy configured Warning: resource configmaps/tekton-results-info is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. configmap/tekton-results-info configured Warning: resource configmaps/tekton-results-postgres is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. configmap/tekton-results-postgres configured Warning: resource services/tekton-results-api-service is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. service/tekton-results-api-service configured Warning: resource services/tekton-results-postgres-service is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. 
kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/tekton-results-postgres-service configured
Warning: resource services/tekton-results-watcher is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/tekton-results-watcher configured
Warning: resource deployments/tekton-results-api is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/tekton-results-api configured
Warning: resource deployments/tekton-results-retention-policy-agent is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/tekton-results-retention-policy-agent configured
Warning: resource deployments/tekton-results-watcher is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/tekton-results-watcher configured
Warning: resource statefulsets/tekton-results-postgres is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
statefulset.apps/tekton-results-postgres configured
certificate.cert-manager.io/serving-cert created
🔑 Deploying Dex...
namespace/dex created
serviceaccount/dex created
clusterrole.rbac.authorization.k8s.io/dex created
clusterrolebinding.rbac.authorization.k8s.io/dex created
configmap/dex-4k6bdhgm54 created
service/dex created
deployment.apps/dex created
certificate.cert-manager.io/dex-cert created
Error from server (NotFound): secrets "oauth2-proxy-client-secret" not found
🔑 Creating secret oauth2-proxy-client-secret
secret/oauth2-proxy-client-secret created
📦 Deploying Registry...
namespace/kind-registry created
service/registry-service created
deployment.apps/registry created
certificate.cert-manager.io/registry-cert created
bundle.trust.cert-manager.io/trusted-ca created
pod/registry-68dcdc78fb-lslt2 condition met
🔄 Deploying Smee...
Randomizing smee-channel ID
namespace/smee-client created
deployment.apps/gosmee-client created
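[NOTE] The "Error from server (NotFound)" followed immediately by a create (here for oauth2-proxy-client-secret, and again later for the UI's client and cookie secrets) is a check-then-create pattern that keeps re-runs idempotent. A minimal sketch of that pattern; the secret name comes from the log, while the namespace, key name, and random length are assumptions:

    # Create the OAuth2 client secret only if it does not already exist.
    if ! kubectl get secret oauth2-proxy-client-secret -n dex >/dev/null 2>&1; then
      kubectl create secret generic oauth2-proxy-client-secret -n dex \
        --from-literal=client-secret="$(openssl rand -hex 20)"
    fi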
🛡️ Deploying Kyverno...
namespace/kyverno serverside-applied
customresourcedefinition.apiextensions.k8s.io/admissionreports.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/backgroundscanreports.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/cleanuppolicies.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/clusteradmissionreports.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/clusterbackgroundscanreports.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/clustercleanuppolicies.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/clusterpolicies.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/clusterpolicyreports.wgpolicyk8s.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/policies.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/policyexceptions.kyverno.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/policyreports.wgpolicyk8s.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/updaterequests.kyverno.io serverside-applied
serviceaccount/kyverno-admission-controller serverside-applied
serviceaccount/kyverno-background-controller serverside-applied
serviceaccount/kyverno-cleanup-controller serverside-applied
serviceaccount/kyverno-cleanup-jobs serverside-applied
serviceaccount/kyverno-reports-controller serverside-applied
role.rbac.authorization.k8s.io/kyverno:admission-controller serverside-applied
role.rbac.authorization.k8s.io/kyverno:background-controller serverside-applied
role.rbac.authorization.k8s.io/kyverno:cleanup-controller serverside-applied
role.rbac.authorization.k8s.io/kyverno:reports-controller serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno-cleanup-jobs serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:admission-controller serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:admission-controller:core serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:background-controller serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:background-controller:core serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:cleanup-controller serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:cleanup-controller:core serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:policies serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:policyreports serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:reports serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:updaterequests serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:policies serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:policyreports serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:reports serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:updaterequests serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:reports-controller serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno:reports-controller:core serverside-applied
rolebinding.rbac.authorization.k8s.io/kyverno:admission-controller serverside-applied
rolebinding.rbac.authorization.k8s.io/kyverno:background-controller serverside-applied
rolebinding.rbac.authorization.k8s.io/kyverno:cleanup-controller serverside-applied
rolebinding.rbac.authorization.k8s.io/kyverno:reports-controller serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/kyverno-cleanup-jobs serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/kyverno:admission-controller serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/kyverno:background-controller serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/kyverno:cleanup-controller serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/kyverno:reports-controller serverside-applied
configmap/kyverno serverside-applied
configmap/kyverno-metrics serverside-applied
service/kyverno-background-controller-metrics serverside-applied
service/kyverno-cleanup-controller serverside-applied
service/kyverno-cleanup-controller-metrics serverside-applied
service/kyverno-reports-controller-metrics serverside-applied
service/kyverno-svc serverside-applied
service/kyverno-svc-metrics serverside-applied
deployment.apps/kyverno-admission-controller serverside-applied
deployment.apps/kyverno-background-controller serverside-applied
deployment.apps/kyverno-cleanup-controller serverside-applied
deployment.apps/kyverno-reports-controller serverside-applied
cronjob.batch/kyverno-cleanup-admission-reports serverside-applied
cronjob.batch/kyverno-cleanup-cluster-admission-reports serverside-applied
clusterrole.rbac.authorization.k8s.io/kyverno-manage-resources created
clusterpolicy.kyverno.io/reduce-tekton-pr-taskrun-resource-requests created
⏳ Waiting for the dependencies to be ready
⏳ Waiting for Tekton configuration to be ready...
tektonconfig.operator.tekton.dev/config condition met
⏳ Waiting for all deployments to be available...
deployment.apps/cert-manager condition met
deployment.apps/cert-manager-cainjector condition met
deployment.apps/cert-manager-webhook condition met
deployment.apps/trust-manager condition met
deployment.apps/dex condition met
deployment.apps/registry condition met
deployment.apps/coredns condition met
deployment.apps/kyverno-admission-controller condition met
deployment.apps/kyverno-background-controller condition met
deployment.apps/kyverno-cleanup-controller condition met
deployment.apps/kyverno-reports-controller condition met
deployment.apps/local-path-provisioner condition met
deployment.apps/pipelines-as-code-controller condition met
deployment.apps/pipelines-as-code-watcher condition met
deployment.apps/pipelines-as-code-webhook condition met
deployment.apps/gosmee-client condition met
deployment.apps/tekton-operator condition met
deployment.apps/tekton-operator-webhook condition met
deployment.apps/tekton-chains-controller condition met
deployment.apps/tekton-events-controller condition met
deployment.apps/tekton-operator-proxy-webhook condition met
deployment.apps/tekton-pipelines-controller condition met
deployment.apps/tekton-pipelines-remote-resolvers condition met
deployment.apps/tekton-pipelines-webhook condition met
deployment.apps/tekton-results-api condition met
deployment.apps/tekton-results-retention-policy-agent condition met
deployment.apps/tekton-results-watcher condition met
deployment.apps/tekton-triggers-controller condition met
deployment.apps/tekton-triggers-core-interceptors condition met
deployment.apps/tekton-triggers-webhook condition met
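[NOTE] The "condition met" lines above are the output of blocking readiness checks rather than of the apply itself. A minimal sketch of the kind of commands that produce them; the timeout values are assumptions:

    # Wait for the Tekton operator's config CR to become Ready, then for every
    # Deployment in the cluster to report Available.
    kubectl wait tektonconfig/config --for=condition=Ready --timeout=600s
    kubectl wait deployment --all --all-namespaces \
      --for=condition=Available --timeout=600s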
⏳ Waiting for Tekton configuration to be ready...
tektonconfig.operator.tekton.dev/config condition met
⏳ Waiting for all deployments to be available...
deployment.apps/cert-manager condition met
deployment.apps/cert-manager-cainjector condition met
deployment.apps/cert-manager-webhook condition met
deployment.apps/trust-manager condition met
deployment.apps/dex condition met
deployment.apps/registry condition met
deployment.apps/coredns condition met
deployment.apps/kyverno-admission-controller condition met
deployment.apps/kyverno-background-controller condition met
deployment.apps/kyverno-cleanup-controller condition met
deployment.apps/kyverno-reports-controller condition met
deployment.apps/local-path-provisioner condition met
deployment.apps/pipelines-as-code-controller condition met
deployment.apps/pipelines-as-code-watcher condition met
deployment.apps/pipelines-as-code-webhook condition met
deployment.apps/gosmee-client condition met
deployment.apps/tekton-operator condition met
deployment.apps/tekton-operator-webhook condition met
deployment.apps/tekton-chains-controller condition met
deployment.apps/tekton-events-controller condition met
deployment.apps/tekton-operator-proxy-webhook condition met
deployment.apps/tekton-pipelines-controller condition met
deployment.apps/tekton-pipelines-remote-resolvers condition met
deployment.apps/tekton-pipelines-webhook condition met
deployment.apps/tekton-results-api condition met
deployment.apps/tekton-results-retention-policy-agent condition met
deployment.apps/tekton-results-watcher condition met
deployment.apps/tekton-triggers-controller condition met
deployment.apps/tekton-triggers-core-interceptors condition met
deployment.apps/tekton-triggers-webhook condition met
[INFO] Installing Konflux CI...
Deploying Konflux
🚀 Deploying Application API CRDs...
customresourcedefinition.apiextensions.k8s.io/applications.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/componentdetectionqueries.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/components.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/deploymenttargetclaims.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/deploymenttargetclasses.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/deploymenttargets.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/environments.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/promotionruns.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/snapshotenvironmentbindings.appstudio.redhat.com created
customresourcedefinition.apiextensions.k8s.io/snapshots.appstudio.redhat.com created
👥 Setting up RBAC permissions...
namespace/openshift-pipelines created
serviceaccount/chains-secrets-admin created
role.rbac.authorization.k8s.io/chains-secret-admin created
role.rbac.authorization.k8s.io/chains-secret-admin created
clusterrole.rbac.authorization.k8s.io/konflux-admin-user-actions created
clusterrole.rbac.authorization.k8s.io/konflux-self-access-reviewer created
clusterrole.rbac.authorization.k8s.io/konflux-viewer-user-actions created
clusterrole.rbac.authorization.k8s.io/tekton-chains-public-key-viewer created
rolebinding.rbac.authorization.k8s.io/chains-secret-admin created
rolebinding.rbac.authorization.k8s.io/tekton-chains-public-key-viewer created
rolebinding.rbac.authorization.k8s.io/chains-secret-admin created
rolebinding.rbac.authorization.k8s.io/tekton-chains-public-key-viewer created
job.batch/tekton-chains-signing-secret created
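[NOTE] The tekton-chains-signing-secret Job provisions the key material Tekton Chains uses to sign build provenance. The job's exact behavior is not shown in this log; a common manual equivalent, using the namespace created above, is the standard cosign invocation below:

    # Generate a cosign key pair straight into the cluster; Chains reads the
    # "signing-secrets" Secret from its configured namespace.
    COSIGN_PASSWORD="" cosign generate-key-pair k8s://openshift-pipelines/signing-secrets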
📜 Deploying Enterprise Contract...
namespace/enterprise-contract-service created
customresourcedefinition.apiextensions.k8s.io/enterprisecontractpolicies.appstudio.redhat.com created
clusterrole.rbac.authorization.k8s.io/enterprisecontractpolicy-editor-role created
clusterrole.rbac.authorization.k8s.io/enterprisecontractpolicy-viewer-role created
rolebinding.rbac.authorization.k8s.io/public-ec-cm created
rolebinding.rbac.authorization.k8s.io/public-ecp created
configmap/ec-defaults created
resource mapping not found for name: "all" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1" ensure CRDs are installed first
resource mapping not found for name: "default" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1" ensure CRDs are installed first
resource mapping not found for name: "redhat" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1" ensure CRDs are installed first
resource mapping not found for name: "redhat-no-hermetic" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1" ensure CRDs are installed first
resource mapping not found for name: "redhat-trusted-tasks" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1" ensure CRDs are installed first
resource mapping not found for name: "slsa3" namespace: "enterprise-contract-service" from "./konflux-ci/enterprise-contract": no matches for kind "EnterpriseContractPolicy" in version "appstudio.redhat.com/v1alpha1" ensure CRDs are installed first
🔄 Retrying command (attempt 2/3)...
namespace/enterprise-contract-service unchanged
customresourcedefinition.apiextensions.k8s.io/enterprisecontractpolicies.appstudio.redhat.com unchanged
clusterrole.rbac.authorization.k8s.io/enterprisecontractpolicy-editor-role unchanged
clusterrole.rbac.authorization.k8s.io/enterprisecontractpolicy-viewer-role unchanged
rolebinding.rbac.authorization.k8s.io/public-ec-cm unchanged
rolebinding.rbac.authorization.k8s.io/public-ecp unchanged
configmap/ec-defaults unchanged
enterprisecontractpolicy.appstudio.redhat.com/all created
enterprisecontractpolicy.appstudio.redhat.com/default created
enterprisecontractpolicy.appstudio.redhat.com/redhat created
enterprisecontractpolicy.appstudio.redhat.com/redhat-no-hermetic created
enterprisecontractpolicy.appstudio.redhat.com/redhat-trusted-tasks created
enterprisecontractpolicy.appstudio.redhat.com/slsa3 created
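[NOTE] The failures above are a race between CRDs and custom resources applied from the same directory: the EnterpriseContractPolicy objects reach the API server before their CRD is established, and the wrapper simply retries until the mapping exists. A minimal sketch of that retry pattern (the function, flags, and sleep interval are illustrative, not the deploy script's actual code):

    # Retry an apply a few times to ride out "no matches for kind" races.
    retry() {
      local attempts=3 i
      for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0
        (( i < attempts )) && echo "🔄 Retrying command (attempt $((i + 1))/${attempts})..." && sleep 10
      done
      return 1
    }
    retry kubectl apply -k ./konflux-ci/enterprise-contract

    # Alternatively, wait for the CRD to be established before applying its CRs:
    kubectl wait --for=condition=Established \
      crd/enterprisecontractpolicies.appstudio.redhat.com --timeout=60s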
🎯 Deploying Release Service...
namespace/release-service serverside-applied
customresourcedefinition.apiextensions.k8s.io/internalrequests.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/internalservicesconfigs.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/releaseplanadmissions.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/releaseplans.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/releases.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/releaseserviceconfigs.appstudio.redhat.com serverside-applied
serviceaccount/release-service-controller-manager serverside-applied
role.rbac.authorization.k8s.io/release-service-leader-election-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-pipeline-resource-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-application-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-component-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-environment-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-manager-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-metrics-auth-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-release-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-release-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-releaseplan-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-releaseplan-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-releaseplanadmission-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-releaseplanadmission-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-snapshot-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-snapshot-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-snapshotenvironmentbinding-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-tekton-role serverside-applied
clusterrole.rbac.authorization.k8s.io/releaseserviceconfig-role serverside-applied
rolebinding.rbac.authorization.k8s.io/release-service-leader-election-rolebinding serverside-applied
rolebinding.rbac.authorization.k8s.io/releaseserviceconfigs-rolebinding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-application-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-component-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-environment-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-manager-rolebinding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-metrics-auth-rolebinding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-release-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-releaseplan-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-releaseplanadmission-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-snapshot-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-snapshotenvironmentbinding-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-tekton-role-binding serverside-applied
configmap/release-service-manager-config serverside-applied
configmap/release-service-manager-properties serverside-applied
service/release-service-controller-manager-metrics-service serverside-applied
service/release-service-webhook-service serverside-applied
deployment.apps/release-service-controller-manager serverside-applied
certificate.cert-manager.io/serving-cert serverside-applied
issuer.cert-manager.io/selfsigned-issuer serverside-applied
mutatingwebhookconfiguration.admissionregistration.k8s.io/release-service-mutating-webhook-configuration serverside-applied
validatingwebhookconfiguration.admissionregistration.k8s.io/release-service-validating-webhook-configuration serverside-applied
error: resource mapping not found for name: "release-service-config" namespace: "release-service" from "./konflux-ci/release": no matches for kind "ReleaseServiceConfig" in version "appstudio.redhat.com/v1alpha1" ensure CRDs are installed first
🔄 Retrying command (attempt 2/3)...
namespace/release-service serverside-applied
customresourcedefinition.apiextensions.k8s.io/internalrequests.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/internalservicesconfigs.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/releaseplanadmissions.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/releaseplans.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/releases.appstudio.redhat.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/releaseserviceconfigs.appstudio.redhat.com serverside-applied
serviceaccount/release-service-controller-manager serverside-applied
role.rbac.authorization.k8s.io/release-service-leader-election-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-pipeline-resource-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-application-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-component-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-environment-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-manager-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-metrics-auth-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-release-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-release-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-releaseplan-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-releaseplan-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-releaseplanadmission-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-releaseplanadmission-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-snapshot-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-snapshot-viewer-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-snapshotenvironmentbinding-editor-role serverside-applied
clusterrole.rbac.authorization.k8s.io/release-service-tekton-role serverside-applied
clusterrole.rbac.authorization.k8s.io/releaseserviceconfig-role serverside-applied
rolebinding.rbac.authorization.k8s.io/release-service-leader-election-rolebinding serverside-applied
rolebinding.rbac.authorization.k8s.io/releaseserviceconfigs-rolebinding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-application-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-component-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-environment-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-manager-rolebinding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-metrics-auth-rolebinding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-release-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-releaseplan-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-releaseplanadmission-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-snapshot-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-snapshotenvironmentbinding-role-binding serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/release-service-tekton-role-binding serverside-applied
configmap/release-service-manager-config serverside-applied
configmap/release-service-manager-properties serverside-applied
service/release-service-controller-manager-metrics-service serverside-applied
service/release-service-webhook-service serverside-applied
deployment.apps/release-service-controller-manager serverside-applied
releaseserviceconfig.appstudio.redhat.com/release-service-config serverside-applied
certificate.cert-manager.io/serving-cert serverside-applied
issuer.cert-manager.io/selfsigned-issuer serverside-applied
mutatingwebhookconfiguration.admissionregistration.k8s.io/release-service-mutating-webhook-configuration serverside-applied
validatingwebhookconfiguration.admissionregistration.k8s.io/release-service-validating-webhook-configuration serverside-applied
🏗️ Deploying Build Service...
namespace/build-service created
serviceaccount/build-service-controller-manager created
role.rbac.authorization.k8s.io/build-service-build-pipeline-config-read-only created
role.rbac.authorization.k8s.io/build-service-leader-election-role created
clusterrole.rbac.authorization.k8s.io/appstudio-pipelines-runner created
clusterrole.rbac.authorization.k8s.io/build-service-manager-role created
clusterrole.rbac.authorization.k8s.io/build-service-metrics-auth-role created
rolebinding.rbac.authorization.k8s.io/build-pipeline-config-read-only-binding created
rolebinding.rbac.authorization.k8s.io/build-service-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/build-pipeline-runner-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/build-service-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/build-service-metrics-auth-rolebinding created
configmap/build-pipeline-config created
service/build-service-controller-manager-metrics-service created
deployment.apps/build-service-controller-manager created
🔄 Deploying Integration Service...
namespace/integration-service created
customresourcedefinition.apiextensions.k8s.io/integrationtestscenarios.appstudio.redhat.com created
serviceaccount/integration-service-controller-manager created
serviceaccount/integration-service-snapshot-garbage-collector created
role.rbac.authorization.k8s.io/integration-service-leader-election-role created
clusterrole.rbac.authorization.k8s.io/integration-service-integrationtestscenario-admin-role created
clusterrole.rbac.authorization.k8s.io/integration-service-integrationtestscenario-editor-role created
clusterrole.rbac.authorization.k8s.io/integration-service-integrationtestscenario-viewer-role created
clusterrole.rbac.authorization.k8s.io/integration-service-manager-role created
clusterrole.rbac.authorization.k8s.io/integration-service-metrics-auth-role created
clusterrole.rbac.authorization.k8s.io/integration-service-snapshot-garbage-collector created
clusterrole.rbac.authorization.k8s.io/integration-service-tekton-editor-role created
clusterrole.rbac.authorization.k8s.io/konflux-integration-runner created
rolebinding.rbac.authorization.k8s.io/integration-service-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/integration-service-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/integration-service-metrics-auth-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/integration-service-snapshot-garbage-collector created
clusterrolebinding.rbac.authorization.k8s.io/integration-service-tekton-role-binding created
configmap/integration-service-manager-config created
service/integration-service-controller-manager-metrics-service created
service/integration-service-webhook-service created
deployment.apps/integration-service-controller-manager created
cronjob.batch/integration-service-snapshot-garbage-collector created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
clusterpolicy.kyverno.io/init-ns-integration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/integration-service-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/integration-service-validating-webhook-configuration created
📋 Setting up Namespace Lister...
namespace/namespace-lister created
serviceaccount/namespace-lister created
clusterrole.rbac.authorization.k8s.io/namespace-lister-authorizer created
clusterrolebinding.rbac.authorization.k8s.io/namespace-lister-authorizer created
service/namespace-lister created
deployment.apps/namespace-lister created
certificate.cert-manager.io/namespace-lister created
networkpolicy.networking.k8s.io/namespace-lister-allow-from-konfluxui created
networkpolicy.networking.k8s.io/namespace-lister-allow-to-apiserver created
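[NOTE] The two NetworkPolicies restrict the namespace-lister to ingress from the Konflux UI and egress to the API server. A sketch of what a policy like namespace-lister-allow-from-konfluxui typically looks like; the selectors and port are assumptions, not values read from the manifests:

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: namespace-lister-allow-from-konfluxui
      namespace: namespace-lister
    spec:
      podSelector: {}                  # every pod in the namespace-lister namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: konflux-ui
          ports:
            - protocol: TCP
              port: 8080               # assumed container port
    EOF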
🎨 Deploying UI components...
namespace/konflux-ui created
serviceaccount/proxy created
clusterrole.rbac.authorization.k8s.io/konflux-proxy created
clusterrole.rbac.authorization.k8s.io/konflux-proxy-namespace-lister created
clusterrolebinding.rbac.authorization.k8s.io/konflux-proxy created
clusterrolebinding.rbac.authorization.k8s.io/konflux-proxy-namespace-lister created
configmap/nginx-idp-location-h959ghd6bh created
configmap/proxy-cb8kd95k47 created
configmap/proxy-nginx-static-fmmfg7d22f created
configmap/proxy-nginx-templates-4m8fgtf4m9 created
secret/proxy created
service/proxy created
deployment.apps/proxy created
certificate.cert-manager.io/serving-cert created
Error from server (NotFound): secrets "oauth2-proxy-client-secret" not found
🔑 Setting up OAuth2 proxy client secret...
secret/oauth2-proxy-client-secret created
Error from server (NotFound): secrets "oauth2-proxy-cookie-secret" not found
🍪 Creating OAuth2 proxy cookie secret...
secret/oauth2-proxy-cookie-secret created
Waiting for Konflux to be ready
⏳ Waiting for Tekton configuration to be ready...
tektonconfig.operator.tekton.dev/config condition met
⏳ Waiting for all deployments to be available...
timed out waiting for the condition on deployments/build-service-controller-manager
timed out waiting for the condition on deployments/cert-manager
timed out waiting for the condition on deployments/cert-manager-cainjector
timed out waiting for the condition on deployments/cert-manager-webhook
timed out waiting for the condition on deployments/trust-manager
timed out waiting for the condition on deployments/dex
timed out waiting for the condition on deployments/integration-service-controller-manager
timed out waiting for the condition on deployments/registry
timed out waiting for the condition on deployments/proxy
timed out waiting for the condition on deployments/coredns
timed out waiting for the condition on deployments/kyverno-admission-controller
timed out waiting for the condition on deployments/kyverno-background-controller
timed out waiting for the condition on deployments/kyverno-cleanup-controller
timed out waiting for the condition on deployments/kyverno-reports-controller
timed out waiting for the condition on deployments/local-path-provisioner
timed out waiting for the condition on deployments/namespace-lister
timed out waiting for the condition on deployments/pipelines-as-code-controller
timed out waiting for the condition on deployments/pipelines-as-code-watcher
timed out waiting for the condition on deployments/pipelines-as-code-webhook
timed out waiting for the condition on deployments/release-service-controller-manager
timed out waiting for the condition on deployments/gosmee-client
timed out waiting for the condition on deployments/tekton-operator
timed out waiting for the condition on deployments/tekton-operator-webhook
timed out waiting for the condition on deployments/tekton-chains-controller
timed out waiting for the condition on deployments/tekton-events-controller
timed out waiting for the condition on deployments/tekton-operator-proxy-webhook
timed out waiting for the condition on deployments/tekton-pipelines-controller
timed out waiting for the condition on deployments/tekton-pipelines-remote-resolvers
timed out waiting for the condition on deployments/tekton-pipelines-webhook
timed out waiting for the condition on deployments/tekton-results-api
timed out waiting for the condition on deployments/tekton-results-retention-policy-agent
timed out waiting for the condition on deployments/tekton-results-watcher
timed out
waiting for the condition on deployments/tekton-triggers-controller timed out waiting for the condition on deployments/tekton-triggers-core-interceptors timed out waiting for the condition on deployments/tekton-triggers-webhook Deployment failed Generating error logs ---------- namespace 'build-service' ---------- ---------- namespace 'cert-manager' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:08:27Z" generateName: trust-manager-7c9f8b8f7d- labels: app: trust-manager app.kubernetes.io/instance: trust-manager app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: trust-manager app.kubernetes.io/version: v0.12.0 helm.sh/chart: trust-manager-v0.12.0 pod-template-hash: 7c9f8b8f7d name: trust-manager-7c9f8b8f7d-cflrm namespace: cert-manager ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: trust-manager-7c9f8b8f7d uid: ee466a36-3bf6-4269-bda4-fd601cb55662 resourceVersion: "1217" uid: f7aca1d1-bc27-4f10-afdd-833685f0fb44 spec: containers: - args: - --log-format=text - --log-level=1 - --metrics-port=9402 - --readiness-probe-port=6060 - --readiness-probe-path=/readyz - --leader-election-lease-duration=15s - --leader-election-renew-deadline=10s - --trust-namespace=cert-manager - --webhook-host=0.0.0.0 - --webhook-port=6443 - --webhook-certificate-dir=/tls - --default-package-location=/packages/cert-manager-package-debian.json image: quay.io/jetstack/trust-manager:v0.12.0 imagePullPolicy: IfNotPresent name: trust-manager ports: - containerPort: 6443 protocol: TCP - containerPort: 9402 protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 6060 scheme: HTTP initialDelaySeconds: 3 periodSeconds: 7 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 100m memory: 250Mi requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsNonRoot: true seccompProfile: type: RuntimeDefault terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tls name: tls readOnly: true - mountPath: /packages name: packages readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-s5dgs readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true initContainers: - args: - /copyandmaybepause - /debian-package - /packages image: quay.io/jetstack/cert-manager-package-debian:20210119.0 imagePullPolicy: IfNotPresent name: cert-manager-package-debian resources: limits: cpu: 100m memory: 250Mi requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsNonRoot: true seccompProfile: type: RuntimeDefault terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /packages name: packages - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-s5dgs readOnly: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: trust-manager serviceAccountName: trust-manager terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - emptyDir: sizeLimit: 50M name: 
packages - name: tls secret: defaultMode: 420 secretName: trust-manager-tls - name: kube-api-access-s5dgs projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:09:02Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:09:02Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:09:20Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:09:20Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:08:27Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://9463d1248b75bc33f5bde9efa75cf7dfa0ca68f44913c36aaf61be3d104438d4 image: quay.io/jetstack/trust-manager:v0.12.0 imageID: quay.io/jetstack/trust-manager@sha256:8285d0d1c374dcf6e29ddcac10a5c937502eb8c318dbd2411f9789ede0e23421 lastState: {} name: trust-manager ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:09:11Z" volumeMounts: - mountPath: /tls name: tls readOnly: true recursiveReadOnly: Disabled - mountPath: /packages name: packages readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-s5dgs readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 initContainerStatuses: - containerID: containerd://554498408f27a390be312a2a79c8c482ae2b5f18518703b6c6c43478ec270b4f image: quay.io/jetstack/cert-manager-package-debian:20210119.0 imageID: quay.io/jetstack/cert-manager-package-debian@sha256:116133f68938ef568aca17a0c691d5b1ef73a9a207029c9a068cf4230053fed5 lastState: {} name: cert-manager-package-debian ready: true restartCount: 0 started: false state: terminated: containerID: containerd://554498408f27a390be312a2a79c8c482ae2b5f18518703b6c6c43478ec270b4f exitCode: 0 finishedAt: "2025-09-06T05:09:01Z" reason: Completed startedAt: "2025-09-06T05:09:01Z" volumeMounts: - mountPath: /packages name: packages - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-s5dgs readOnly: true recursiveReadOnly: Disabled phase: Running podIP: 10.244.0.10 podIPs: - ip: 10.244.0.10 qosClass: Burstable startTime: "2025-09-06T05:08:27Z" --- Pod 'trust-manager-7c9f8b8f7d-cflrm' under namespace 'cert-manager': Pod trust-manager-7c9f8b8f7d-cflrm MountVolume.SetUp failed for volume "tls" : secret "trust-manager-tls" not found (FailedMount) 2025/09/06 05:09:01 reading from /debian-package 2025/09/06 05:09:01 writing to /packages 2025/09/06 05:09:01 successfully copied /debian-package/cert-manager-package-debian.json to /packages/cert-manager-package-debian.json time=2025-09-06T05:09:11.601Z level=INFO msg="successfully loaded default package from filesystem" logger=trust/bundle path=/packages/cert-manager-package-debian.json time=2025-09-06T05:09:11.601Z level=INFO msg="registering webhook endpoints" logger=trust/webhook time=2025-09-06T05:09:11.601Z level=INFO msg="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called" logger=trust/manager/controller-runtime/builder GVK="trust.cert-manager.io/v1alpha1, Kind=Bundle" time=2025-09-06T05:09:11.601Z level=INFO msg="Registering a validating 
webhook" logger=trust/manager/controller-runtime/builder GVK="trust.cert-manager.io/v1alpha1, Kind=Bundle" path=/validate-trust-cert-manager-io-v1alpha1-bundle time=2025-09-06T05:09:11.602Z level=INFO msg="Registering webhook" path=/validate-trust-cert-manager-io-v1alpha1-bundle logger=trust/manager/controller-runtime/webhook time=2025-09-06T05:09:11.602Z level=INFO msg="Starting metrics server" logger=trust/manager/controller-runtime/metrics time=2025-09-06T05:09:11.602Z level=INFO msg="Serving metrics server" logger=trust/manager/controller-runtime/metrics bindAddress=0.0.0.0:9402 secure=false time=2025-09-06T05:09:11.602Z level=INFO msg="Starting webhook server" logger=trust/manager/controller-runtime/webhook time=2025-09-06T05:09:11.602Z level=INFO msg="starting server" name="health probe" addr=[::]:6060 logger=trust/manager time=2025-09-06T05:09:11.602Z level=INFO msg="Updated current TLS certificate" logger=trust/manager/controller-runtime/certwatcher time=2025-09-06T05:09:11.602Z level=INFO msg="Serving webhook server" logger=trust/manager/controller-runtime/webhook host=0.0.0.0 port=6443 time=2025-09-06T05:09:11.602Z level=INFO msg="attempting to acquire leader lease cert-manager/trust-manager-leader-election..." time=2025-09-06T05:09:11.602Z level=INFO msg="Starting certificate watcher" logger=trust/manager/controller-runtime/certwatcher time=2025-09-06T05:09:11.609Z level=INFO msg="successfully acquired lease cert-manager/trust-manager-leader-election" time=2025-09-06T05:09:11.609Z level=DEBUG+3 msg="trust-manager-7c9f8b8f7d-cflrm_fa109de2-0f72-401a-aab5-67d3d1032c21 became leader" logger=trust/manager/events type=Normal object="{Kind:Lease Namespace:cert-manager Name:trust-manager-leader-election UID:3e78c235-bf44-4717-bdeb-ee148851f3d0 APIVersion:coordination.k8s.io/v1 ResourceVersion:1196 FieldPath:}" reason=LeaderElection time=2025-09-06T05:09:11.609Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1alpha1.Bundle" time=2025-09-06T05:09:11.609Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1.Namespace" time=2025-09-06T05:09:11.609Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1.ConfigMap" time=2025-09-06T05:09:11.609Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1.Secret" time=2025-09-06T05:09:11.609Z level=INFO msg="Starting EventSource" controller=bundles logger=trust/manager source="kind source: *v1.PartialObjectMetadata" time=2025-09-06T05:09:11.609Z level=INFO msg="Starting Controller" controller=bundles logger=trust/manager time=2025-09-06T05:09:11.897Z level=INFO msg="Starting workers" controller=bundles logger=trust/manager "worker count"=1 time=2025-09-06T05:14:03.205Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4209 FieldPath:}" reason=Synced time=2025-09-06T05:14:03.205Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4209 FieldPath:}" reason=Synced time=2025-09-06T05:14:15.898Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" 
logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:14:15.898Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:14:23.099Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:14:23.099Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:15:51.497Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:15:51.497Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:16:15.897Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:16:15.897Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:16:45.497Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:16:45.497Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:17:31.499Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:17:31.499Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca 
UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:17:48.878Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:17:48.878Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:18:12.298Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:18:12.298Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:18:20.698Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced time=2025-09-06T05:18:20.698Z level=DEBUG+3 msg="Successfully synced Bundle to all namespaces" logger=trust/manager/events type=Normal object="{Kind:Bundle Namespace: Name:trusted-ca UID:55ac604e-8d1a-4b97-91ce-77f53c65876f APIVersion:trust.cert-manager.io/v1alpha1 ResourceVersion:4247 FieldPath:}" reason=Synced
--- Pod 'trust-manager-7c9f8b8f7d-cflrm' under namespace 'cert-manager': Pod trust-manager-7c9f8b8f7d-cflrm Failed to create pod sandbox: rpc error: code = Unavailable desc = error reading from server: read unix @->/run/containerd/containerd.sock: read: connection reset by peer (FailedCreatePodSandBox)
---------- namespace 'default' ----------
---------- namespace 'dex' ----------
apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:13:53Z" generateName: dex-845c496cbf- labels: app: dex pod-template-hash: 845c496cbf name: dex-845c496cbf-4tbrx namespace: dex ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: dex-845c496cbf uid: 15942ee4-66a6-4fbc-a665-9362c36a683a resourceVersion: "4346" uid: 3669762b-529c-4cfc-bf25-2c656e2c54ca spec: containers: - command: - /usr/local/bin/dex - serve - /etc/dex/cfg/config.yaml env: - name: CLIENT_SECRET valueFrom: secretKeyRef: key: client-secret name: oauth2-proxy-client-secret image: ghcr.io/dexidp/dex:v2.44.0 imagePullPolicy: IfNotPresent name: dex ports: - containerPort: 9443 name: https protocol: TCP - containerPort: 5558 name: telemetry protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /healthz/ready port: telemetry scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 50m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/dex/cfg name: dex - mountPath: /etc/dex/tls name: tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-dsgqk readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: dex serviceAccountName: dex terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 items: - key: config.yaml path: config.yaml name: dex-4k6bdhgm54 name: dex - name: tls secret: defaultMode: 420 secretName: dex-cert - name: kube-api-access-dsgqk projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:06Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:13:53Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:10Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:10Z" status:
"True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:13:53Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://d97a14cb78b6c33e611ce3165ff4bcb89678bd09a2f1f4f152aa72e74ee9e75a image: ghcr.io/dexidp/dex:v2.44.0 imageID: ghcr.io/dexidp/dex@sha256:5d0656fce7d453c0e3b2706abf40c0d0ce5b371fb0b73b3cf714d05f35fa5f86 lastState: terminated: containerID: containerd://120cac42cb11d6ad8913778eb78b90aa591810b330d1bce7b4eb4dc9b91f05b0 exitCode: 2 finishedAt: "2025-09-06T05:14:07Z" reason: Error startedAt: "2025-09-06T05:14:06Z" name: dex ready: true restartCount: 1 started: true state: running: startedAt: "2025-09-06T05:14:09Z" volumeMounts: - mountPath: /etc/dex/cfg name: dex - mountPath: /etc/dex/tls name: tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-dsgqk readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.36 podIPs: - ip: 10.244.0.36 qosClass: Burstable startTime: "2025-09-06T05:13:53Z" --- Pod 'dex-845c496cbf-4tbrx' under namespace 'dex': Pod dex-845c496cbf-4tbrx MountVolume.SetUp failed for volume "tls" : secret "dex-cert" not found (FailedMount) time=2025-09-06T05:14:09.345Z level=INFO msg="Version info" dex_version=v2.44.0 go.version=go1.25.0 go.os=linux go.arch=amd64 time=2025-09-06T05:14:09.440Z level=INFO msg="config issuer" issuer=https://localhost:9443/idp/ time=2025-09-06T05:14:09.441Z level=INFO msg="kubernetes client" api_version=dex.coreos.com/v1 time=2025-09-06T05:14:09.444Z level=INFO msg="creating custom Kubernetes resources" time=2025-09-06T05:14:09.444Z level=INFO msg="checking if custom resource has already been created..." object=authcodes.dex.coreos.com time=2025-09-06T05:14:09.451Z level=INFO msg="the custom resource already available, skipping create" object=authcodes.dex.coreos.com time=2025-09-06T05:14:09.451Z level=INFO msg="checking if custom resource has already been created..." object=authrequests.dex.coreos.com time=2025-09-06T05:14:09.458Z level=INFO msg="the custom resource already available, skipping create" object=authrequests.dex.coreos.com time=2025-09-06T05:14:09.458Z level=INFO msg="checking if custom resource has already been created..." object=oauth2clients.dex.coreos.com time=2025-09-06T05:14:09.464Z level=INFO msg="the custom resource already available, skipping create" object=oauth2clients.dex.coreos.com time=2025-09-06T05:14:09.464Z level=INFO msg="checking if custom resource has already been created..." object=signingkeies.dex.coreos.com time=2025-09-06T05:14:09.470Z level=INFO msg="the custom resource already available, skipping create" object=signingkeies.dex.coreos.com time=2025-09-06T05:14:09.470Z level=INFO msg="checking if custom resource has already been created..." object=refreshtokens.dex.coreos.com time=2025-09-06T05:14:09.477Z level=INFO msg="the custom resource already available, skipping create" object=refreshtokens.dex.coreos.com time=2025-09-06T05:14:09.477Z level=INFO msg="checking if custom resource has already been created..." object=passwords.dex.coreos.com time=2025-09-06T05:14:09.483Z level=INFO msg="the custom resource already available, skipping create" object=passwords.dex.coreos.com time=2025-09-06T05:14:09.483Z level=INFO msg="checking if custom resource has already been created..." 
object=offlinesessionses.dex.coreos.com time=2025-09-06T05:14:09.509Z level=INFO msg="the custom resource already available, skipping create" object=offlinesessionses.dex.coreos.com time=2025-09-06T05:14:09.509Z level=INFO msg="checking if custom resource has already been created..." object=connectors.dex.coreos.com time=2025-09-06T05:14:09.540Z level=INFO msg="the custom resource already available, skipping create" object=connectors.dex.coreos.com time=2025-09-06T05:14:09.540Z level=INFO msg="checking if custom resource has already been created..." object=devicerequests.dex.coreos.com time=2025-09-06T05:14:09.548Z level=INFO msg="the custom resource already available, skipping create" object=devicerequests.dex.coreos.com time=2025-09-06T05:14:09.548Z level=INFO msg="checking if custom resource has already been created..." object=devicetokens.dex.coreos.com time=2025-09-06T05:14:09.554Z level=INFO msg="the custom resource already available, skipping create" object=devicetokens.dex.coreos.com time=2025-09-06T05:14:09.554Z level=INFO msg="config storage" storage_type=kubernetes time=2025-09-06T05:14:09.554Z level=INFO msg="config static client" client_name=oauth2-proxy time=2025-09-06T05:14:09.554Z level=INFO msg="config connector: local passwords enabled" time=2025-09-06T05:14:09.554Z level=INFO msg="config skipping approval screen" time=2025-09-06T05:14:09.554Z level=INFO msg="config using password grant connector" password_connector=local time=2025-09-06T05:14:09.554Z level=INFO msg="config refresh tokens rotation" enabled=true time=2025-09-06T05:14:09.642Z level=INFO msg="keys expired, rotating" time=2025-09-06T05:14:10.149Z level=INFO msg="keys rotated" next_rotation=2025-09-06T11:14:10.143Z time=2025-09-06T05:14:10.149Z level=INFO msg="listening on" server=telemetry address=0.0.0.0:5558 time=2025-09-06T05:14:10.149Z level=INFO msg="listening on" server=https address=0.0.0.0:9443
--- Pod 'dex-845c496cbf-4tbrx' under namespace 'dex': Pod dex-845c496cbf-4tbrx Readiness probe failed: Get "http://10.244.0.36:5558/healthz/ready": dial tcp 10.244.0.36:5558: connect: connection refused (Unhealthy)
---------- namespace 'enterprise-contract-service' ----------
---------- namespace 'integration-service' ----------
apiVersion: v1 kind: Pod metadata: annotations: kubectl.kubernetes.io/default-container: manager creationTimestamp: "2025-09-06T05:18:03Z" generateName: integration-service-controller-manager-8665678d48- labels: control-plane: controller-manager pod-template-hash: 8665678d48 name: integration-service-controller-manager-8665678d48-tg6gn namespace: integration-service ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: integration-service-controller-manager-8665678d48 uid: f69a238d-855a-408c-9393-102752882683 resourceVersion: "6636" uid: c56d35e5-5244-4d0d-adaf-c429b796eecf spec: containers: - args: - --metrics-bind-address=:8080 - --leader-elect - --lease-duration=30s - --leader-renew-deadline=15s - --leader-elector-retry-period=5s command: - /manager image: quay.io/konflux-ci/integration-service:cc65bf09ea9d6296732981cc436304f91927f0ab imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager ports: - containerPort: 9443 name: webhook-server protocol: TCP - containerPort: 8081 name: probes protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-77n9l readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true serviceAccount: integration-service-controller-manager serviceAccountName: integration-service-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists
tolerationSeconds: 300 volumes: - name: cert secret: defaultMode: 420 secretName: webhook-server-cert - name: kube-api-access-77n9l projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:03Z" status: "False" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:03Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:03Z" message: 'containers with unready status: [manager]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:03Z" message: 'containers with unready status: [manager]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:03Z" status: "True" type: PodScheduled containerStatuses: - image: quay.io/konflux-ci/integration-service:cc65bf09ea9d6296732981cc436304f91927f0ab imageID: "" lastState: {} name: manager ready: false restartCount: 0 started: false state: waiting: reason: ContainerCreating volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-77n9l readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Pending qosClass: Burstable startTime: "2025-09-06T05:18:03Z" --- Pod 'integration-service-controller-manager-8665678d48-tg6gn' under namespace 'integration-service': Pod integration-service-controller-manager-8665678d48-tg6gn MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found (FailedMount) Error from server (BadRequest): container "manager" in pod "integration-service-controller-manager-8665678d48-tg6gn" is waiting to start: ContainerCreating Failed to get pod logs for integration-service-controller-manager-8665678d48-tg6gn in namespace integration-service ---------- namespace 'kind-registry' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:14:00Z" generateName: registry-68dcdc78fb- labels: pod-template-hash: 68dcdc78fb run: registry name: registry-68dcdc78fb-lslt2 namespace: kind-registry ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: registry-68dcdc78fb uid: 6be3a3bb-8888-4df8-ad29-57f0f5b9fc0e resourceVersion: "4374" uid: 177fd99a-8b62-49c0-91a2-43480056d292 spec: containers: - env: - name: REGISTRY_HTTP_TLS_CERTIFICATE value: /certs/tls.crt - name: REGISTRY_HTTP_TLS_KEY value: /certs/tls.key image: registry:2 imagePullPolicy: IfNotPresent name: registry ports: - containerPort: 5000 protocol: TCP resources: limits: cpu: 100m memory: 250Mi requests: cpu: 10m memory: 50Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /certs name: certs - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9qqfn readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 
tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: certs secret: defaultMode: 420 secretName: local-registry-tls - name: kube-api-access-9qqfn projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:13Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:00Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:13Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:13Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:00Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://f52b3b98f080d9137bc8868a0142dba26f81a6c3cdfb14e0874fa6750c030d9e image: docker.io/library/registry:2 imageID: docker.io/library/registry@sha256:a3d8aaa63ed8681a604f1dea0aa03f100d5895b6a58ace528858a7b332415373 lastState: {} name: registry ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:14:13Z" volumeMounts: - mountPath: /certs name: certs - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9qqfn readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.37 podIPs: - ip: 10.244.0.37 qosClass: Burstable startTime: "2025-09-06T05:14:00Z" --- Pod 'registry-68dcdc78fb-lslt2' under namespace 'kind-registry': Pod registry-68dcdc78fb-lslt2 MountVolume.SetUp failed for volume "certs" : secret "local-registry-tls" not found (FailedMount) time="2025-09-06T05:14:13Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT" time="2025-09-06T05:14:13Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT_443_TCP" time="2025-09-06T05:14:13Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT_443_TCP_ADDR" time="2025-09-06T05:14:13Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT_443_TCP_PORT" time="2025-09-06T05:14:13Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT_443_TCP_PROTO" time="2025-09-06T05:14:13Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_SERVICE_HOST" time="2025-09-06T05:14:13Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_SERVICE_PORT" time="2025-09-06T05:14:13.27136153Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." 
go.version=go1.20.8 instance.id=8976bf19-7b9d-47dd-8a4f-00820b129870 service=registry version=2.8.3 time="2025-09-06T05:14:13.271400691Z" level=info msg="redis not configured" go.version=go1.20.8 instance.id=8976bf19-7b9d-47dd-8a4f-00820b129870 service=registry version=2.8.3 time="2025-09-06T05:14:13.271497063Z" level=info msg="Starting upload purge in 52m0s" go.version=go1.20.8 instance.id=8976bf19-7b9d-47dd-8a4f-00820b129870 service=registry version=2.8.3 time="2025-09-06T05:14:13.271532344Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.20.8 instance.id=8976bf19-7b9d-47dd-8a4f-00820b129870 service=registry version=2.8.3 time="2025-09-06T05:14:13.271792429Z" level=info msg="restricting TLS version to tls1.2 or higher" go.version=go1.20.8 instance.id=8976bf19-7b9d-47dd-8a4f-00820b129870 service=registry version=2.8.3 time="2025-09-06T05:14:13.356491077Z" level=info msg="restricting TLS cipher suites to: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_AES_128_GCM_SHA256,TLS_CHACHA20_POLY1305_SHA256,TLS_AES_256_GCM_SHA384" go.version=go1.20.8 instance.id=8976bf19-7b9d-47dd-8a4f-00820b129870 service=registry version=2.8.3 time="2025-09-06T05:14:13.356871596Z" level=info msg="listening on [::]:5000, tls" go.version=go1.20.8 instance.id=8976bf19-7b9d-47dd-8a4f-00820b129870 service=registry version=2.8.3 ---------- namespace 'konflux-ui' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:18:28Z" generateName: proxy-79cf68d5c4- labels: app: proxy pod-template-hash: 79cf68d5c4 name: proxy-79cf68d5c4-p5tkq namespace: konflux-ui ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: proxy-79cf68d5c4 uid: 74c10a54-647e-41ff-a734-8551de6f7ec5 resourceVersion: "6954" uid: 817fcb53-e91a-470c-8574-f16829af8896 spec: containers: - command: - nginx - -g - daemon off; - -c - /etc/nginx/nginx.conf image: registry.access.redhat.com/ubi9/nginx-124@sha256:b924363ff07ee0f8fd4f680497da774ac0721722a119665998ff5b2111098ad1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /health port: 9443 scheme: HTTPS initialDelaySeconds: 30 periodSeconds: 60 successThreshold: 1 timeoutSeconds: 1 name: nginx ports: - containerPort: 8080 name: web protocol: TCP - containerPort: 9443 name: web-tls protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /health port: 9443 scheme: HTTPS initialDelaySeconds: 30 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 300m memory: 256Mi requests: cpu: 30m memory: 128Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/nginx/nginx.conf name: proxy readOnly: true subPath: nginx.conf - mountPath: /var/log/nginx name: logs - mountPath: /var/lib/nginx/tmp name: nginx-tmp - mountPath: /run name: run - mountPath: /mnt name: serving-cert - mountPath: /mnt/nginx-generated-config name: nginx-generated-config - mountPath: /mnt/nginx-additional-location-configs name: nginx-static - mountPath: /opt/app-root/src/static-content name: static-content - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jbztz readOnly: true - args: - --provider - oidc - --provider-display-name - 
Dex OIDC - --client-id - oauth2-proxy - --http-address - 127.0.0.1:6000 - --redirect-url - https://localhost:9443/oauth2/callback - --oidc-issuer-url - https://localhost:9443/idp/ - --skip-oidc-discovery - --login-url - https://localhost:9443/idp/auth - --redeem-url - https://dex.dex.svc.cluster.local:9443/idp/token - --oidc-jwks-url - https://dex.dex.svc.cluster.local:9443/idp/keys - --cookie-secure - "true" - --cookie-name - __Host-konflux-ci-cookie - --email-domain - '*' - --ssl-insecure-skip-verify - "true" - --set-xauthrequest - "true" - --whitelist-domain - localhost:9443 - --skip-jwt-bearer-tokens env: - name: OAUTH2_PROXY_CLIENT_SECRET valueFrom: secretKeyRef: key: client-secret name: oauth2-proxy-client-secret - name: OAUTH2_PROXY_COOKIE_SECRET valueFrom: secretKeyRef: key: cookie-secret name: oauth2-proxy-cookie-secret image: quay.io/oauth2-proxy/oauth2-proxy:latest@sha256:786bed0f000c0f8a7b31619244ebab02406a8856a4faf3f5fb1df61fbd6c30ed imagePullPolicy: Always name: oauth2-proxy ports: - containerPort: 6000 name: web protocol: TCP resources: limits: cpu: 300m memory: 256Mi requests: cpu: 30m memory: 128Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jbztz readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true initContainers: - command: - cp - -R - /opt/app-root/src/. - /mnt/static-content/ image: quay.io/konflux-ci/konflux-ui@sha256:6fd7c0240404686adc8f6ec5a9db1e31bae68856f689dc68079b479bfa73a6e7 imagePullPolicy: IfNotPresent name: copy-static-content resources: limits: cpu: 50m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /mnt/static-content name: static-content - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jbztz readOnly: true - command: - sh - -c - | set -e # Copy the auth.conf template and replace the bearer token token=$(cat /mnt/api-token/token) sed "s/__BEARER_TOKEN__/$token/" /mnt/nginx-templates/auth.conf > /mnt/nginx-generated-config/auth.conf chmod 640 /mnt/nginx-generated-config/auth.conf image: registry.access.redhat.com/ubi9/ubi@sha256:66233eebd72bb5baa25190d4f55e1dc3fff3a9b77186c1f91a0abdb274452072 imagePullPolicy: IfNotPresent name: generate-nginx-configs resources: limits: cpu: 50m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /mnt/nginx-generated-config name: nginx-generated-config - mountPath: /mnt/nginx-templates name: nginx-templates - mountPath: /mnt/api-token name: api-token - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jbztz readOnly: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: proxy serviceAccountName: proxy terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 topologySpreadConstraints: - 
labelSelector: matchLabels: app: proxy maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: ScheduleAnyway volumes: - configMap: defaultMode: 420 items: - key: nginx.conf path: nginx.conf name: proxy-cb8kd95k47 name: proxy - configMap: defaultMode: 420 name: proxy-nginx-templates-4m8fgtf4m9 name: nginx-templates - name: nginx-static projected: defaultMode: 420 sources: - configMap: name: proxy-nginx-static-fmmfg7d22f - configMap: name: nginx-idp-location-h959ghd6bh - emptyDir: {} name: logs - emptyDir: {} name: nginx-tmp - emptyDir: {} name: run - name: serving-cert secret: defaultMode: 420 secretName: serving-cert - emptyDir: {} name: nginx-generated-config - name: api-token secret: defaultMode: 420 secretName: proxy - emptyDir: {} name: static-content - name: kube-api-access-jbztz projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:28Z" status: "False" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:28Z" message: 'containers with incomplete status: [copy-static-content generate-nginx-configs]' reason: ContainersNotInitialized status: "False" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:28Z" message: 'containers with unready status: [nginx oauth2-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:28Z" message: 'containers with unready status: [nginx oauth2-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:28Z" status: "True" type: PodScheduled containerStatuses: - image: registry.access.redhat.com/ubi9/nginx-124@sha256:b924363ff07ee0f8fd4f680497da774ac0721722a119665998ff5b2111098ad1 imageID: "" lastState: {} name: nginx ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - mountPath: /etc/nginx/nginx.conf name: proxy readOnly: true recursiveReadOnly: Disabled - mountPath: /var/log/nginx name: logs - mountPath: /var/lib/nginx/tmp name: nginx-tmp - mountPath: /run name: run - mountPath: /mnt name: serving-cert - mountPath: /mnt/nginx-generated-config name: nginx-generated-config - mountPath: /mnt/nginx-additional-location-configs name: nginx-static - mountPath: /opt/app-root/src/static-content name: static-content - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jbztz readOnly: true recursiveReadOnly: Disabled - image: quay.io/oauth2-proxy/oauth2-proxy:latest@sha256:786bed0f000c0f8a7b31619244ebab02406a8856a4faf3f5fb1df61fbd6c30ed imageID: "" lastState: {} name: oauth2-proxy ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jbztz readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 initContainerStatuses: - image: quay.io/konflux-ci/konflux-ui@sha256:6fd7c0240404686adc8f6ec5a9db1e31bae68856f689dc68079b479bfa73a6e7 imageID: "" lastState: {} name: copy-static-content ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - mountPath: /mnt/static-content name: static-content - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jbztz readOnly: true recursiveReadOnly: Disabled - image: registry.access.redhat.com/ubi9/ubi@sha256:66233eebd72bb5baa25190d4f55e1dc3fff3a9b77186c1f91a0abdb274452072 imageID: "" lastState: {} name: generate-nginx-configs ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - mountPath: /mnt/nginx-generated-config name: nginx-generated-config - mountPath: /mnt/nginx-templates name: nginx-templates - mountPath: /mnt/api-token name: api-token - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jbztz readOnly: true recursiveReadOnly: Disabled phase: Pending qosClass: Burstable startTime: "2025-09-06T05:18:28Z" --- Pod 'proxy-79cf68d5c4-p5tkq' under namespace 'konflux-ui': Pod proxy-79cf68d5c4-p5tkq MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found (FailedMount) Error from server (BadRequest): container "generate-nginx-configs" in pod "proxy-79cf68d5c4-p5tkq" is waiting to start: PodInitializing Failed to get pod logs for proxy-79cf68d5c4-p5tkq in namespace konflux-ui ---------- namespace 'kube-node-lease' ---------- ---------- namespace 'kube-public' ---------- ---------- namespace 'kube-system' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:03:37Z" generateName: coredns-668d6bf9bc- labels: k8s-app: kube-dns pod-template-hash: 668d6bf9bc name: coredns-668d6bf9bc-x88l5 namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: coredns-668d6bf9bc uid: a79d344e-9686-4d3b-ba16-35aff1719f42 resourceVersion: "485" uid: 2fdfb6f6-9d69-4136-8aa8-6e9b108ba6b7 spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: - kube-dns topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - -conf - /etc/coredns/Corefile image: registry.k8s.io/coredns/coredns:v1.11.3 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: coredns ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /ready port: 8181 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - ALL readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vkqvf readOnly: true dnsPolicy: Default enableServiceLinks: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: coredns serviceAccountName: coredns terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - effect: NoSchedule key: node-role.kubernetes.io/control-plane - effect: NoExecute key: 
node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 items: - key: Corefile path: Corefile name: coredns name: config-volume - name: kube-api-access-vkqvf projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:49Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:49Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://1ab6167886644a36fbea7b6c8e80825e4183864f074fe112e65c1297ab9102bc image: registry.k8s.io/coredns/coredns:v1.11.3 imageID: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 lastState: {} name: coredns ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:03:52Z" volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vkqvf readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.3 podIPs: - ip: 10.244.0.3 qosClass: Burstable startTime: "2025-09-06T05:03:49Z" --- Pod 'coredns-668d6bf9bc-x88l5' under namespace 'kube-system': Pod coredns-668d6bf9bc-x88l5 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. 
(FailedScheduling) .:53 [INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b CoreDNS-1.11.3 linux/amd64, go1.21.11, a6338e9 apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:03:37Z" generateName: coredns-668d6bf9bc- labels: k8s-app: kube-dns pod-template-hash: 668d6bf9bc name: coredns-668d6bf9bc-zbdxx namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: coredns-668d6bf9bc uid: a79d344e-9686-4d3b-ba16-35aff1719f42 resourceVersion: "478" uid: 9f8cffa9-5208-4017-9398-8ebe315c8644 spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: - kube-dns topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - -conf - /etc/coredns/Corefile image: registry.k8s.io/coredns/coredns:v1.11.3 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: coredns ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /ready port: 8181 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - ALL readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8vkx6 readOnly: true dnsPolicy: Default enableServiceLinks: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: coredns serviceAccountName: coredns terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - effect: NoSchedule key: node-role.kubernetes.io/control-plane - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 items: - key: Corefile path: Corefile name: coredns name: config-volume - name: kube-api-access-8vkx6 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:49Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:49Z" 
status: "True" type: PodScheduled containerStatuses: - containerID: containerd://0f283fed5cc0069e38e5c61451663bfafb064c628f9dbc58e1cf6d9483d1b5ea image: registry.k8s.io/coredns/coredns:v1.11.3 imageID: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 lastState: {} name: coredns ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:03:52Z" volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8vkx6 readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.4 podIPs: - ip: 10.244.0.4 qosClass: Burstable startTime: "2025-09-06T05:03:49Z" --- Pod 'coredns-668d6bf9bc-zbdxx' under namespace 'kube-system': Pod coredns-668d6bf9bc-zbdxx 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. (FailedScheduling) .:53 [INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b CoreDNS-1.11.3 linux/amd64, go1.21.11, a6338e9 apiVersion: v1 kind: Pod metadata: annotations: kubernetes.io/config.hash: 31cebaf30e092dbe08214df0c8b1d778 kubernetes.io/config.mirror: 31cebaf30e092dbe08214df0c8b1d778 kubernetes.io/config.seen: "2025-09-06T05:03:31.021292337Z" kubernetes.io/config.source: file creationTimestamp: "2025-09-06T05:03:31Z" labels: component: kube-scheduler tier: control-plane name: kube-scheduler-kind-mapt-control-plane namespace: kube-system ownerReferences: - apiVersion: v1 controller: true kind: Node name: kind-mapt-control-plane uid: e80c9308-7acd-4dc7-895e-307255b4c17d resourceVersion: "430" uid: 5365364f-1a49-4aa4-9c79-f5baa24fb6bc spec: containers: - command: - kube-scheduler - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf - --bind-address=127.0.0.1 - --kubeconfig=/etc/kubernetes/scheduler.conf - --leader-elect=true image: registry.k8s.io/kube-scheduler:v1.32.5 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 127.0.0.1 path: /livez port: 10259 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 15 name: kube-scheduler readinessProbe: failureThreshold: 3 httpGet: host: 127.0.0.1 path: /readyz port: 10259 scheme: HTTPS periodSeconds: 1 successThreshold: 1 timeoutSeconds: 15 resources: requests: cpu: 100m startupProbe: failureThreshold: 24 httpGet: host: 127.0.0.1 path: /livez port: 10259 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 15 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/kubernetes/scheduler.conf name: kubeconfig readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: seccompProfile: type: RuntimeDefault terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute operator: Exists volumes: - hostPath: path: /etc/kubernetes/scheduler.conf type: FileOrCreate name: kubeconfig status: conditions: - lastProbeTime: null lastTransitionTime: 
"2025-09-06T05:03:31Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:31Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:36Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:36Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:31Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://dc5eba6100ee03d112977a38885e4f313b551435e2a78c94dd8c57d6176b2a6f image: registry.k8s.io/kube-scheduler-amd64:v1.32.5 imageID: sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a lastState: {} name: kube-scheduler ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:03:26Z" hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.89.0.2 podIPs: - ip: 10.89.0.2 qosClass: Burstable startTime: "2025-09-06T05:03:31Z" --- Pod 'kube-scheduler-kind-mapt-control-plane' under namespace 'kube-system': Pod kube-scheduler-kind-mapt-control-plane Node is not ready (NodeNotReady) I0906 05:03:26.928949 1 serving.go:386] Generated self-signed cert in-memory W0906 05:03:28.560570 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0906 05:03:28.560595 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0906 05:03:28.560605 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0906 05:03:28.560613 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0906 05:03:28.569336 1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.5" I0906 05:03:28.569351 1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0906 05:03:28.570778 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0906 05:03:28.570843 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0906 05:03:28.570880 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259 I0906 05:03:28.570902 1 tlsconfig.go:243] "Starting DynamicServingCertificateController" W0906 05:03:28.571965 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0906 05:03:28.572002 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" W0906 05:03:28.572216 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0906 05:03:28.572248 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.572254 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0906 05:03:28.572284 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.572298 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0906 05:03:28.572678 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0906 05:03:28.572212 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0906 
05:03:28.572325 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0906 05:03:28.572717 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" E0906 05:03:28.572715 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.572682 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0906 05:03:28.572802 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.572804 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0906 05:03:28.572674 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.572900 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0906 05:03:28.572897 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0906 05:03:28.572686 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope E0906 05:03:28.572940 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.572946 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource 
"pods" in API group "" at the cluster scope E0906 05:03:28.572943 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" E0906 05:03:28.572972 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" E0906 05:03:28.572896 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.572900 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0906 05:03:28.573027 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" E0906 05:03:28.572938 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" E0906 05:03:28.572870 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.573086 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0906 05:03:28.573116 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" W0906 05:03:28.573149 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0906 05:03:28.573190 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.462044 1 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0906 05:03:29.462077 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.511733 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0906 05:03:29.511763 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.516389 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope E0906 05:03:29.516439 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.602714 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0906 05:03:29.602741 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.640619 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0906 05:03:29.640643 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.678661 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0906 05:03:29.678682 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.770807 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0906 05:03:29.770854 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.811836 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0906 05:03:29.811876 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.814718 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0906 05:03:29.814742 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.885742 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0906 05:03:29.885763 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" W0906 05:03:29.893674 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0906 05:03:29.893695 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" W0906 05:03:29.902858 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0906 05:03:29.902899 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: 
failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" I0906 05:03:32.471779 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0906 05:03:32.972252 1 leaderelection.go:257] attempting to acquire leader lease kube-system/kube-scheduler... I0906 05:03:32.978637 1 leaderelection.go:271] successfully acquired lease kube-system/kube-scheduler E0906 05:07:04.828117 1 framework.go:1316] "Plugin Failed" err="pods \"test-pvc-consumer\" is forbidden: unable to create new content in namespace test-pvc-ns because it is being terminated" plugin="DefaultBinder" pod="test-pvc-ns/test-pvc-consumer" node="kind-mapt-control-plane" E0906 05:07:04.828196 1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": pods \"test-pvc-consumer\" is forbidden: unable to create new content in namespace test-pvc-ns because it is being terminated" pod="test-pvc-ns/test-pvc-consumer" ---------- namespace 'kyverno' ---------- ---------- namespace 'local-path-storage' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:03:37Z" generateName: local-path-provisioner-7dc846544d- labels: app: local-path-provisioner pod-template-hash: 7dc846544d name: local-path-provisioner-7dc846544d-d7xsn namespace: local-path-storage ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: local-path-provisioner-7dc846544d uid: 36646d60-d253-4f37-96d0-cb681463d30f resourceVersion: "482" uid: bc2d937f-9642-40f5-bf5b-1f4d3cbc8d55 spec: containers: - command: - local-path-provisioner - --debug - start - --helper-image - docker.io/kindest/local-path-helper:v20241212-8ac705d0 - --config - /etc/config/config.json env: - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: CONFIG_MOUNT_PATH value: /etc/config/ image: docker.io/kindest/local-path-provisioner:v20250214-acbabc1a imagePullPolicy: IfNotPresent name: local-path-provisioner resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/config/ name: config-volume - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-227fc readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: local-path-provisioner-service-account serviceAccountName: local-path-provisioner-service-account terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/control-plane operator: Equal - effect: NoSchedule key: node-role.kubernetes.io/master operator: Equal - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 name: local-path-config name: config-volume - name: kube-api-access-227fc projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: 
namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:49Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:53Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:03:49Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://f514196fa0ff1b7c1fbdc6a2b28a63484f1d3da3d67bf9c5ecfefc8d90579302 image: docker.io/kindest/local-path-provisioner:v20250214-acbabc1a imageID: sha256:bbb6209cc873b9b4095bd014b4687512eea2bd7b246f9ec06f4f6f0be14d9fb6 lastState: {} name: local-path-provisioner ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:03:52Z" volumeMounts: - mountPath: /etc/config/ name: config-volume - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-227fc readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.2 podIPs: - ip: 10.244.0.2 qosClass: BestEffort startTime: "2025-09-06T05:03:49Z" --- Pod 'local-path-provisioner-7dc846544d-d7xsn' under namespace 'local-path-storage': Pod local-path-provisioner-7dc846544d-d7xsn 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. (FailedScheduling) time="2025-09-06T05:03:52Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/var/local-path-provisioner\"]}],\"storageClassConfigs\":null}" time="2025-09-06T05:03:52Z" level=debug msg="Provisioner started" I0906 05:03:52.640262 1 controller.go:824] "Starting provisioner controller" component="rancher.io/local-path_local-path-provisioner-7dc846544d-d7xsn_4bbf3942-a311-4af7-b527-b040cdae6353" I0906 05:03:52.740494 1 controller.go:873] "Started provisioner controller" component="rancher.io/local-path_local-path-provisioner-7dc846544d-d7xsn_4bbf3942-a311-4af7-b527-b040cdae6353" time="2025-09-06T05:07:00Z" level=debug msg="config doesn't contain node kind-mapt-control-plane, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" time="2025-09-06T05:07:00Z" level=info msg="Creating volume pvc-c306f89f-3f6f-47f7-969e-bd2323a34add at kind-mapt-control-plane:/var/local-path-provisioner/pvc-c306f89f-3f6f-47f7-969e-bd2323a34add_test-pvc-ns_test-pvc" time="2025-09-06T05:07:00Z" level=info msg="create the helper pod helper-pod-create-pvc-c306f89f-3f6f-47f7-969e-bd2323a34add into local-path-storage" I0906 05:07:00.835611 1 event.go:389] "Event occurred" object="test-pvc-ns/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="Provisioning" message="External provisioner is provisioning volume for claim \"test-pvc-ns/test-pvc\"" time="2025-09-06T05:07:03Z" level=info msg="Volume pvc-c306f89f-3f6f-47f7-969e-bd2323a34add has been created on kind-mapt-control-plane:/var/local-path-provisioner/pvc-c306f89f-3f6f-47f7-969e-bd2323a34add_test-pvc-ns_test-pvc" time="2025-09-06T05:07:03Z" level=info msg="Start of helper-pod-create-pvc-c306f89f-3f6f-47f7-969e-bd2323a34add logs" time="2025-09-06T05:07:03Z" level=info msg="End of helper-pod-create-pvc-c306f89f-3f6f-47f7-969e-bd2323a34add logs" I0906 05:07:03.876760 1 event.go:389] "Event occurred" 
object="test-pvc-ns/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-c306f89f-3f6f-47f7-969e-bd2323a34add" time="2025-09-06T05:07:05Z" level=info msg="Deleting volume pvc-c306f89f-3f6f-47f7-969e-bd2323a34add at kind-mapt-control-plane:/var/local-path-provisioner/pvc-c306f89f-3f6f-47f7-969e-bd2323a34add_test-pvc-ns_test-pvc" time="2025-09-06T05:07:05Z" level=info msg="create the helper pod helper-pod-delete-pvc-c306f89f-3f6f-47f7-969e-bd2323a34add into local-path-storage" time="2025-09-06T05:07:08Z" level=info msg="Volume pvc-c306f89f-3f6f-47f7-969e-bd2323a34add has been deleted on kind-mapt-control-plane:/var/local-path-provisioner/pvc-c306f89f-3f6f-47f7-969e-bd2323a34add_test-pvc-ns_test-pvc" time="2025-09-06T05:07:08Z" level=info msg="Start of helper-pod-delete-pvc-c306f89f-3f6f-47f7-969e-bd2323a34add logs" time="2025-09-06T05:07:08Z" level=info msg="End of helper-pod-delete-pvc-c306f89f-3f6f-47f7-969e-bd2323a34add logs" time="2025-09-06T05:11:46Z" level=debug msg="config doesn't contain node kind-mapt-control-plane, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" time="2025-09-06T05:11:46Z" level=info msg="Creating volume pvc-7b1f0472-5a39-4653-a939-c469e7b636d9 at kind-mapt-control-plane:/var/local-path-provisioner/pvc-7b1f0472-5a39-4653-a939-c469e7b636d9_tekton-pipelines_postgredb-tekton-results-postgres-0" time="2025-09-06T05:11:46Z" level=info msg="create the helper pod helper-pod-create-pvc-7b1f0472-5a39-4653-a939-c469e7b636d9 into local-path-storage" I0906 05:11:46.973555 1 event.go:389] "Event occurred" object="tekton-pipelines/postgredb-tekton-results-postgres-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="Provisioning" message="External provisioner is provisioning volume for claim \"tekton-pipelines/postgredb-tekton-results-postgres-0\"" time="2025-09-06T05:11:49Z" level=info msg="Volume pvc-7b1f0472-5a39-4653-a939-c469e7b636d9 has been created on kind-mapt-control-plane:/var/local-path-provisioner/pvc-7b1f0472-5a39-4653-a939-c469e7b636d9_tekton-pipelines_postgredb-tekton-results-postgres-0" time="2025-09-06T05:11:50Z" level=info msg="Start of helper-pod-create-pvc-7b1f0472-5a39-4653-a939-c469e7b636d9 logs" time="2025-09-06T05:11:50Z" level=info msg="End of helper-pod-create-pvc-7b1f0472-5a39-4653-a939-c469e7b636d9 logs" I0906 05:11:50.013880 1 event.go:389] "Event occurred" object="tekton-pipelines/postgredb-tekton-results-postgres-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-7b1f0472-5a39-4653-a939-c469e7b636d9" ---------- namespace 'namespace-lister' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:18:15Z" generateName: namespace-lister-78fcb78b8c- labels: apps: namespace-lister pod-template-hash: 78fcb78b8c name: namespace-lister-78fcb78b8c-s6pp7 namespace: namespace-lister ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: namespace-lister-78fcb78b8c uid: bdb029ef-bb73-4c62-afd6-f70c5b9aba3c resourceVersion: "6798" uid: 7239c79e-8ee5-466f-9c27-cd06b0c9bd0a spec: containers: - args: - -enable-tls - -cert-path=/var/tls/tls.crt - -key-path=/var/tls/tls.key env: - name: LOG_LEVEL value: "0" - name: CACHE_RESYNC_PERIOD value: 10m - name: CACHE_NAMESPACE_LABELSELECTOR value: konflux-ci.dev/type=tenant - name: AUTH_USERNAME_HEADER value: 
Impersonate-User image: quay.io/konflux-ci/namespace-lister@sha256:a42b42dc79acf26ce2af95283a98a489eafb4dd264a40b3028b44855043e7d76 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTPS initialDelaySeconds: 1 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: namespace-lister ports: - containerPort: 8080 name: http protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8080 scheme: HTTPS initialDelaySeconds: 1 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 200m memory: 256Mi requests: cpu: 20m memory: 64Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsNonRoot: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/tls name: tls readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-slm6c readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: namespace-lister serviceAccountName: namespace-lister terminationGracePeriodSeconds: 60 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 topologySpreadConstraints: - labelSelector: matchLabels: apps: namespace-lister maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: ScheduleAnyway volumes: - name: tls secret: defaultMode: 420 secretName: namespace-lister-tls - name: kube-api-access-slm6c projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:15Z" status: "False" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:15Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:15Z" message: 'containers with unready status: [namespace-lister]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:15Z" message: 'containers with unready status: [namespace-lister]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:18:15Z" status: "True" type: PodScheduled containerStatuses: - image: quay.io/konflux-ci/namespace-lister@sha256:a42b42dc79acf26ce2af95283a98a489eafb4dd264a40b3028b44855043e7d76 imageID: "" lastState: {} name: namespace-lister ready: false restartCount: 0 started: false state: waiting: reason: ContainerCreating volumeMounts: - mountPath: /var/tls name: tls readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-slm6c readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Pending qosClass: Burstable startTime: "2025-09-06T05:18:15Z" --- Pod 'namespace-lister-78fcb78b8c-s6pp7' under namespace 'namespace-lister': Pod namespace-lister-78fcb78b8c-s6pp7 MountVolume.SetUp failed for volume "tls" : secret "namespace-lister-tls" 
not found (FailedMount) Error from server (BadRequest): container "namespace-lister" in pod "namespace-lister-78fcb78b8c-s6pp7" is waiting to start: ContainerCreating Failed to get pod logs for namespace-lister-78fcb78b8c-s6pp7 in namespace namespace-lister ---------- namespace 'openshift-pipelines' ---------- ---------- namespace 'pipelines-as-code' ---------- ---------- namespace 'release-service' ---------- apiVersion: v1 kind: Pod metadata: annotations: kubectl.kubernetes.io/default-container: manager creationTimestamp: "2025-09-06T05:16:58Z" generateName: release-service-controller-manager-6794d6954b- labels: control-plane: controller-manager pod-template-hash: 6794d6954b name: release-service-controller-manager-6794d6954b-xw6tk namespace: release-service ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: release-service-controller-manager-6794d6954b uid: 1c372ec4-bbe9-4fb1-9748-fee7e2cd7b16 resourceVersion: "9171" uid: 02865914-2336-4928-8bed-f4c035c846c1 spec: containers: - args: - --metrics-bind-address=:8080 - --leader-elect=false command: - /manager env: - name: DEFAULT_RELEASE_PVC valueFrom: configMapKeyRef: key: DEFAULT_RELEASE_PVC name: release-service-manager-properties optional: true - name: DEFAULT_RELEASE_WORKSPACE_NAME valueFrom: configMapKeyRef: key: DEFAULT_RELEASE_WORKSPACE_NAME name: release-service-manager-properties optional: true - name: DEFAULT_RELEASE_WORKSPACE_SIZE valueFrom: configMapKeyRef: key: DEFAULT_RELEASE_WORKSPACE_SIZE name: release-service-manager-properties optional: true - name: SERVICE_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace image: quay.io/konflux-ci/release-service:cb1f7f944ff1046c62ed2d550954203c00c57caf imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager ports: - containerPort: 9443 name: webhook-server protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 128Mi requests: cpu: 10m memory: 64Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-t4p47 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true serviceAccount: release-service-controller-manager serviceAccountName: release-service-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: cert secret: defaultMode: 420 secretName: webhook-server-cert - name: kube-api-access-t4p47 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: 
namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:23:01Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:16:58Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:23:12Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:23:12Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:16:58Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://5ca356e2ee51c9d8e23e5863a0fe3324e43037c9121533c93d1a2cd4329c4283 image: quay.io/konflux-ci/release-service:cb1f7f944ff1046c62ed2d550954203c00c57caf imageID: quay.io/konflux-ci/release-service@sha256:bfb7ab06bf0daaf130d70fcc2237267718bdda420d2e045c5ebcbc37b19d5734 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:23:00Z" volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-t4p47 readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.43 podIPs: - ip: 10.244.0.43 qosClass: Burstable startTime: "2025-09-06T05:16:58Z" --- Pod 'release-service-controller-manager-6794d6954b-xw6tk' under namespace 'release-service': Pod release-service-controller-manager-6794d6954b-xw6tk MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found (FailedMount) 2025-09-06T05:23:00.325Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-appstudio-redhat-com-v1alpha1-author"} 2025-09-06T05:23:00.325Z INFO controller-runtime.builder Registering a mutating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=Release", "path": "/mutate-appstudio-redhat-com-v1alpha1-release"} 2025-09-06T05:23:00.325Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-appstudio-redhat-com-v1alpha1-release"} 2025-09-06T05:23:00.326Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=Release", "path": "/validate-appstudio-redhat-com-v1alpha1-release"} 2025-09-06T05:23:00.326Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-appstudio-redhat-com-v1alpha1-release"} 2025-09-06T05:23:00.326Z INFO controller-runtime.builder Registering a mutating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=ReleasePlan", "path": "/mutate-appstudio-redhat-com-v1alpha1-releaseplan"} 2025-09-06T05:23:00.326Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-appstudio-redhat-com-v1alpha1-releaseplan"} 2025-09-06T05:23:00.326Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=ReleasePlan", "path": "/validate-appstudio-redhat-com-v1alpha1-releaseplan"} 2025-09-06T05:23:00.326Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-appstudio-redhat-com-v1alpha1-releaseplan"} 2025-09-06T05:23:00.326Z INFO controller-runtime.builder Registering a mutating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=ReleasePlanAdmission", "path": "/mutate-appstudio-redhat-com-v1alpha1-releaseplanadmission"} 2025-09-06T05:23:00.326Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-appstudio-redhat-com-v1alpha1-releaseplanadmission"} 2025-09-06T05:23:00.326Z INFO 
controller-runtime.builder Registering a validating webhook {"GVK": "appstudio.redhat.com/v1alpha1, Kind=ReleasePlanAdmission", "path": "/validate-appstudio-redhat-com-v1alpha1-releaseplanadmission"} 2025-09-06T05:23:00.326Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-appstudio-redhat-com-v1alpha1-releaseplanadmission"} 2025-09-06T05:23:00.326Z INFO setup starting manager 2025-09-06T05:23:00.327Z INFO starting server {"name": "health probe", "addr": "[::]:8081"} 2025-09-06T05:23:00.327Z INFO controller-runtime.metrics Starting metrics server 2025-09-06T05:23:00.327Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-09-06T05:23:00.327Z INFO controller-runtime.webhook Starting webhook server 2025-09-06T05:23:00.327Z INFO setup disabling http/2 2025-09-06T05:23:00.327Z INFO controller-runtime.certwatcher Updated current TLS certificate 2025-09-06T05:23:00.328Z INFO controller-runtime.webhook Serving webhook server {"host": "", "port": 9443} 2025-09-06T05:23:00.328Z INFO controller-runtime.certwatcher Starting certificate poll+watcher {"interval": "10s"} 2025-09-06T05:23:00.428Z INFO Starting EventSource {"controller": "releaseplanadmission", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlanAdmission", "source": "kind source: *v1alpha1.ReleasePlan"} 2025-09-06T05:23:00.428Z INFO Starting EventSource {"controller": "release", "controllerGroup": "appstudio.redhat.com", "controllerKind": "Release", "source": "kind source: *v1.PipelineRun"} 2025-09-06T05:23:00.428Z INFO Starting EventSource {"controller": "release", "controllerGroup": "appstudio.redhat.com", "controllerKind": "Release", "source": "kind source: *v1alpha1.Release"} 2025-09-06T05:23:00.428Z INFO Starting EventSource {"controller": "releaseplanadmission", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlanAdmission", "source": "kind source: *v1alpha1.ReleasePlanAdmission"} 2025-09-06T05:23:00.428Z INFO Starting EventSource {"controller": "releaseplan", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlan", "source": "kind source: *v1alpha1.ReleasePlanAdmission"} 2025-09-06T05:23:00.428Z INFO Starting EventSource {"controller": "releaseplan", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlan", "source": "kind source: *v1alpha1.ReleasePlan"} 2025-09-06T05:23:00.528Z INFO Starting Controller {"controller": "release", "controllerGroup": "appstudio.redhat.com", "controllerKind": "Release"} 2025-09-06T05:23:00.528Z INFO Starting workers {"controller": "release", "controllerGroup": "appstudio.redhat.com", "controllerKind": "Release", "worker count": 1} 2025-09-06T05:23:00.528Z INFO Starting Controller {"controller": "releaseplan", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlan"} 2025-09-06T05:23:00.528Z INFO Starting workers {"controller": "releaseplan", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlan", "worker count": 1} 2025-09-06T05:23:00.528Z INFO Starting Controller {"controller": "releaseplanadmission", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlanAdmission"} 2025-09-06T05:23:00.528Z INFO Starting workers {"controller": "releaseplanadmission", "controllerGroup": "appstudio.redhat.com", "controllerKind": "ReleasePlanAdmission", "worker count": 1} ---------- namespace 'smee-client' ---------- apiVersion: v1 kind: Pod metadata: creationTimestamp: "2025-09-06T05:14:16Z" generateName: 
gosmee-client-759fb5658- labels: app: gosmee-client pod-template-hash: 759fb5658 name: gosmee-client-759fb5658-gkn2m namespace: smee-client ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: gosmee-client-759fb5658 uid: 01ad6bef-a859-4c80-aee6-040b74a2e719 resourceVersion: "4689" uid: 6a64dfd0-3d73-4b83-a033-e89082729e9b spec: containers: - args: - client - https://smee.io/knsbGTelD29T9PhTGOxWEqil6X7t13fipTQZvC - http://localhost:8080 image: ghcr.io/chmouel/gosmee:v0.28.0 imagePullPolicy: Always livenessProbe: exec: command: - /shared/check-smee-health.sh failureThreshold: 2 initialDelaySeconds: 20 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 10 name: gosmee resources: limits: cpu: 100m memory: 32Mi requests: cpu: 10m memory: 32Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /shared name: shared-health - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8lv4v readOnly: true - env: - name: DOWNSTREAM_SERVICE_URL value: http://pipelines-as-code-controller.pipelines-as-code:8180 - name: SMEE_CHANNEL_URL value: https://smee.io/knsbGTelD29T9PhTGOxWEqil6X7t13fipTQZvC - name: INSECURE_SKIP_VERIFY value: "true" - name: HEALTH_CHECK_TIMEOUT_SECONDS value: "20" image: quay.io/konflux-ci/smee-sidecar:latest@sha256:91c82100bd042ced2105d3678b6bf642c1a1f22f0c69f542dac6cc3acce48fb0 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - /shared/check-sidecar-health.sh failureThreshold: 2 initialDelaySeconds: 10 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 10 name: health-check-sidecar ports: - containerPort: 8080 name: http protocol: TCP - containerPort: 9100 name: metrics protocol: TCP resources: limits: cpu: 100m memory: 32Mi requests: cpu: 10m memory: 32Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /shared name: shared-health - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8lv4v readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65532 serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - emptyDir: {} name: shared-health - name: kube-api-access-8lv4v projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:38Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:16Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:38Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:14:38Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: 
"2025-09-06T05:14:16Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://718f824a27be99970605a69c973c29f197b87e29143680a97f3cbdc0f00e5ac7 image: ghcr.io/chmouel/gosmee:v0.28.0 imageID: ghcr.io/chmouel/gosmee@sha256:4fd46588d14928225eee1ae9d35d380f35f846e4215df138a1711662b7411958 lastState: {} name: gosmee ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:14:25Z" volumeMounts: - mountPath: /shared name: shared-health - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8lv4v readOnly: true recursiveReadOnly: Disabled - containerID: containerd://5eaa1ac5e5e76f718e029ef65f54912c01f0941a14ea3da7e79c2b55b1ea34e4 image: sha256:436458c8ee0c39f9e2dadd85ae2cf675df3072fe64d342a3ccff674cfd36735f imageID: quay.io/konflux-ci/smee-sidecar@sha256:91c82100bd042ced2105d3678b6bf642c1a1f22f0c69f542dac6cc3acce48fb0 lastState: {} name: health-check-sidecar ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:14:38Z" volumeMounts: - mountPath: /shared name: shared-health - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8lv4v readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.38 podIPs: - ip: 10.244.0.38 qosClass: Burstable startTime: "2025-09-06T05:14:16Z" --- Pod 'gosmee-client-759fb5658-gkn2m' under namespace 'smee-client': Pod gosmee-client-759fb5658-gkn2m Liveness probe failed: Health file missing: /shared/health-status.txt Sat, 06 Sep 2025 05:14:25 UTC INF Starting gosmee client version: dev Sat, 06 Sep 2025 05:14:26 UTC WRN Could not parse server version: invalid character '<' looking for beginning of value Sat, 06 Sep 2025 05:14:26 UTC INF Configured reconnection strategy to retry indefinitely Sat, 06 Sep 2025 05:14:27 UTC INF 2025-09-06T05.14.09.087 Forwarding https://smee.io/knsbGTelD29T9PhTGOxWEqil6X7t13fipTQZvC to http://localhost:8080 Sat, 06 Sep 2025 05:15:09 UTC INF 2025-09-06T05.15.09.571 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:15:39 UTC INF 2025-09-06T05.15.09.563 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:16:09 UTC INF 2025-09-06T05.16.09.550 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:16:39 UTC INF 2025-09-06T05.16.09.571 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:17:09 UTC INF 2025-09-06T05.17.09.556 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:17:39 UTC INF 2025-09-06T05.17.09.403 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:18:09 UTC INF 2025-09-06T05.18.09.570 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:18:39 UTC INF 2025-09-06T05.18.09.595 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:19:09 UTC INF 2025-09-06T05.19.09.596 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:19:39 UTC INF 2025-09-06T05.19.09.621 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:20:09 UTC INF 2025-09-06T05.20.09.553 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:20:39 UTC INF 2025-09-06T05.20.09.431 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:21:09 UTC INF 2025-09-06T05.21.09.573 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:21:39 UTC INF 2025-09-06T05.21.09.564 request replayed to http://localhost:8080, status: 
200 Sat, 06 Sep 2025 05:22:09 UTC INF 2025-09-06T05.22.09.543 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:22:39 UTC INF 2025-09-06T05.22.09.561 request replayed to http://localhost:8080, status: 200 Sat, 06 Sep 2025 05:23:09 UTC INF 2025-09-06T05.23.09.571 request replayed to http://localhost:8080, status: 200 2025/09/06 05:14:38 Starting Smee instrumentation sidecar... 2025/09/06 05:14:38 Wrote read-only probe script: /shared/check-smee-health.sh 2025/09/06 05:14:38 Wrote read-only probe script: /shared/check-sidecar-health.sh 2025/09/06 05:14:38 Wrote read-only probe script: /shared/check-file-age.sh 2025/09/06 05:14:38 pprof endpoints disabled (set ENABLE_PPROF=true to enable) 2025/09/06 05:14:38 Management server (metrics) listening on :9100 2025/09/06 05:14:38 Starting background health checker (interval: 30s, timeout: 20s) 2025/09/06 05:14:38 Relay server listening on :8080 with timeouts (read: 180s, write: 60s, idle: 600s) 2025/09/06 05:15:09 Health check completed: success (Health check completed successfully) 2025/09/06 05:15:39 Health check completed: success (Health check completed successfully) 2025/09/06 05:16:09 Health check completed: success (Health check completed successfully) 2025/09/06 05:16:39 Health check completed: success (Health check completed successfully) 2025/09/06 05:17:09 Health check completed: success (Health check completed successfully) 2025/09/06 05:17:39 Health check completed: success (Health check completed successfully) 2025/09/06 05:18:09 Health check completed: success (Health check completed successfully) 2025/09/06 05:18:39 Health check completed: success (Health check completed successfully) 2025/09/06 05:19:09 Health check completed: success (Health check completed successfully) 2025/09/06 05:19:39 Health check completed: success (Health check completed successfully) 2025/09/06 05:20:09 Health check completed: success (Health check completed successfully) 2025/09/06 05:20:39 Health check completed: success (Health check completed successfully) 2025/09/06 05:21:09 Health check completed: success (Health check completed successfully) 2025/09/06 05:21:39 Health check completed: success (Health check completed successfully) 2025/09/06 05:22:09 Health check completed: success (Health check completed successfully) 2025/09/06 05:22:39 Health check completed: success (Health check completed successfully) 2025/09/06 05:23:09 Health check completed: success (Health check completed successfully) ---------- namespace 'tekton-operator' ---------- ---------- namespace 'tekton-pipelines' ---------- apiVersion: v1 kind: Pod metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false" creationTimestamp: "2025-09-06T05:11:46Z" generateName: tekton-results-api-864fcb8bc6- labels: app.kubernetes.io/name: tekton-results-api app.kubernetes.io/version: v0.14.0 operator.tekton.dev/deployment-spec-applied-hash: e229b86bc4e564cc4a1463b46b06fd81 pod-template-hash: 864fcb8bc6 name: tekton-results-api-864fcb8bc6-hr5gn namespace: tekton-pipelines ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: tekton-results-api-864fcb8bc6 uid: 5b374ab3-3cb6-4b12-8b60-98f38150bdaa resourceVersion: "3087" uid: e277c410-f71f-44a0-9d9d-ba5dad7656bf spec: containers: - env: - name: DB_PASSWORD valueFrom: secretKeyRef: key: POSTGRES_PASSWORD name: tekton-results-postgres - name: DB_USER valueFrom: secretKeyRef: key: POSTGRES_USER name: tekton-results-postgres image: 
ghcr.io/tektoncd/results/api-b1b7ffa9ba32f7c3020c3b68830b30a8:v0.14.0@sha256:61ce677b79370cb4027669cc251127e4036d2cec07850fd12ef93baeac1fb2eb imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTPS initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: api readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTPS initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: {} securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true seccompProfile: type: RuntimeDefault startupProbe: failureThreshold: 10 httpGet: path: /healthz port: 8080 scheme: HTTPS initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/tekton/results name: config readOnly: true - mountPath: /etc/tls name: tls readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gnzgh readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: kind-mapt-control-plane preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault serviceAccount: tekton-results-api serviceAccountName: tekton-results-api terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 name: tekton-results-api-config name: config - name: tls secret: defaultMode: 420 secretName: tekton-results-tls - name: kube-api-access-gnzgh projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2025-09-06T05:11:54Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2025-09-06T05:11:46Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2025-09-06T05:12:36Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2025-09-06T05:12:36Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2025-09-06T05:11:46Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://beb4acd4e08876f52e3717d068995b8d24a91a9976eb814ab42a5bede79625da image: sha256:1206cce6fe1f26358dc10bb18fb709903fb18d0bd382e3c2022b25348853bb5d imageID: ghcr.io/tektoncd/results/api-b1b7ffa9ba32f7c3020c3b68830b30a8@sha256:61ce677b79370cb4027669cc251127e4036d2cec07850fd12ef93baeac1fb2eb lastState: {} name: api ready: true restartCount: 0 started: true state: running: startedAt: "2025-09-06T05:11:53Z" volumeMounts: - mountPath: /etc/tekton/results name: config readOnly: true recursiveReadOnly: Disabled - mountPath: /etc/tls name: tls readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gnzgh readOnly: true recursiveReadOnly: Disabled hostIP: 10.89.0.2 hostIPs: - ip: 10.89.0.2 phase: Running podIP: 10.244.0.22 podIPs: - ip: 10.244.0.22 qosClass: BestEffort startTime: 
"2025-09-06T05:11:46Z" --- Pod 'tekton-results-api-864fcb8bc6-hr5gn' under namespace 'tekton-pipelines': Pod tekton-results-api-864fcb8bc6-hr5gn Startup probe failed: Get "https://10.244.0.22:8080/healthz": dial tcp 10.244.0.22:8080: connect: connection refused (Unhealthy) 2025/09/06 05:11:53 maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined {"level":"warn","ts":1757135514.0067472,"caller":"api/main.go:116","msg":"Error connecting to database (retrying in 10s): failed to connect to `host=tekton-results-postgres-service.tekton-pipelines.svc.cluster.local user=result database=tekton-results`: dial error (dial tcp 10.96.169.33:5432: connect: connection refused)"} {"level":"warn","ts":1757135524.0104663,"caller":"api/main.go:116","msg":"Error connecting to database (retrying in 10s): failed to connect to `host=tekton-results-postgres-service.tekton-pipelines.svc.cluster.local user=result database=tekton-results`: dial error (dial tcp 10.96.169.33:5432: connect: connection refused)"} {"level":"warn","ts":1757135534.0117424,"caller":"api/main.go:116","msg":"Error connecting to database (retrying in 10s): failed to connect to `host=tekton-results-postgres-service.tekton-pipelines.svc.cluster.local user=result database=tekton-results`: dial error (dial tcp 10.96.169.33:5432: connect: connection refused)"} {"level":"warn","ts":1757135544.0111084,"caller":"api/main.go:116","msg":"Error connecting to database (retrying in 10s): failed to connect to `host=tekton-results-postgres-service.tekton-pipelines.svc.cluster.local user=result database=tekton-results`: dial error (dial tcp 10.96.169.33:5432: connect: connection refused)"} {"level":"info","ts":1757135554.013199,"caller":"api/main.go:167","msg":"Kubernetes RBAC authorization check enabled"} {"level":"info","ts":1757135554.013639,"caller":"api/main.go:188","msg":"Kubernetes RBAC impersonation enabled"} {"level":"warn","ts":1757135554.1043637,"caller":"plugin/plugin_logs.go:423","msg":"Plugin Logs API Disable: unsupported type of logs given for plugin, legacy logging system might work"} {"level":"info","ts":1757135554.1056259,"caller":"api/main.go:276","msg":"Prometheus server listening on: 9090"} {"level":"info","ts":1757135554.1060445,"caller":"api/main.go:325","msg":"gRPC and REST server listening on: 8080"} apiVersion: v1 items: [] kind: List metadata: resourceVersion: "" apiVersion: v1 items: [] kind: List metadata: resourceVersion: "" Generated logs successfully [INFO] Applying Kyverno to reduce resources for testing clusterpolicy.kyverno.io/e2e-reduce-resources created [INFO] Creating Test Resources... ๐Ÿงช Deploying test resources... ๐Ÿ‘ฅ Setting up demo users... 
[INFO] Applying Kyverno to reduce resources for testing
clusterpolicy.kyverno.io/e2e-reduce-resources created
[INFO] Creating Test Resources...
🧪 Deploying test resources...
👥 Setting up demo users...
namespace/user-ns1 created
namespace/user-ns2 created
serviceaccount/appstudio-pipeline created
serviceaccount/appstudio-pipeline created
role.rbac.authorization.k8s.io/ns2-pod-viewer-job-creator created
rolebinding.rbac.authorization.k8s.io/release-pipeline-resource-role-binding created
rolebinding.rbac.authorization.k8s.io/user1-konflux-admin created
rolebinding.rbac.authorization.k8s.io/user2-konflux-admin created
rolebinding.rbac.authorization.k8s.io/ns2-pod-viewer-job-creator-binding created
rolebinding.rbac.authorization.k8s.io/release-pipeline-resource-role-binding created
rolebinding.rbac.authorization.k8s.io/user1-konflux-admin created
rolebinding.rbac.authorization.k8s.io/user2-konflux-admin created
clusterrolebinding.rbac.authorization.k8s.io/managed1-self-access-review created
clusterrolebinding.rbac.authorization.k8s.io/managed2-self-access-review created
clusterrolebinding.rbac.authorization.k8s.io/user1-self-access-review created
secret/regcred-empty created
application.appstudio.redhat.com/sample-component created
component.appstudio.redhat.com/sample-component created
releaseplan.appstudio.redhat.com/local-release created
releaseplan.appstudio.redhat.com/sample-component created
integrationtestscenario.appstudio.redhat.com/sample-component-enterprise-contract created
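With the demo users and sample resources applied, one way to sanity-check the test setup is to list the Konflux custom resources and the per-user RBAC the script just created. A rough sketch; it assumes the konflux-admin bindings target users named user1 and user2, which is not shown in this output:

# Sample application, component, release plans and test scenario created for the e2e run
kubectl get applications.appstudio.redhat.com,components.appstudio.redhat.com -A
kubectl get releaseplans.appstudio.redhat.com,integrationtestscenarios.appstudio.redhat.com -A

# Per-user namespaces and their konflux-admin role bindings
kubectl get rolebindings -n user-ns1
kubectl get rolebindings -n user-ns2

# Spot-check that a demo user can read the sample application (user name is assumed)
kubectl auth can-i get applications.appstudio.redhat.com -n user-ns1 --as user1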