INFO: Log in to your Red Hat account...
INFO: Configure AWS Credentials...
WARN: The current version (1.2.56) is not up to date with latest rosa cli released version (1.2.59).
WARN: It is recommended that you update to the latest version.
INFO: Logged in as 'konflux-ci-418295695583' on 'https://api.openshift.com'
INFO: Create ROSA with HCP cluster...
WARN: The current version (1.2.56) is not up to date with latest rosa cli released version (1.2.59).
WARN: It is recommended that you update to the latest version.
INFO: Creating cluster 'kx-8408208f1d'
INFO: To view a list of clusters and their status, run 'rosa list clusters'
INFO: Cluster 'kx-8408208f1d' has been created.
INFO: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.

Name:                        kx-8408208f1d
Domain Prefix:               kx-8408208f1d
Display Name:                kx-8408208f1d
ID:                          2nbin63ebj2mm2dqgslsd86vbn4a15po
External ID:                 a45fe867-4ada-4955-83ec-5c6738ff47d6
Control Plane:               ROSA Service Hosted
OpenShift Version:           4.17.45
Channel Group:               stable
DNS:                         Not ready
AWS Account:                 418295695583
AWS Billing Account:         418295695583
API URL:
Console URL:
Region:                      us-east-1
Availability:
 - Control Plane:            MultiAZ
 - Data Plane:               MultiAZ
Nodes:
 - Compute (desired):        3
 - Compute (current):        0
Network:
 - Type:                     OVNKubernetes
 - Service CIDR:             172.30.0.0/16
 - Machine CIDR:             10.0.0.0/16
 - Pod CIDR:                 10.128.0.0/14
 - Host Prefix:              /23
 - Subnets:                  subnet-001fc23497e4a3aeb, subnet-00ffba09365a434bc, subnet-074cbf0329958194a, subnet-0689cd077699b690a, subnet-0f9f09e46f74cde64, subnet-033f48892ddbaa09d
EC2 Metadata Http Tokens:    optional
Role (STS) ARN:              arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Installer-Role
Support Role ARN:            arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Support-Role
Instance IAM Roles:
 - Worker:                   arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Worker-Role
Operator IAM Roles:
 - arn:aws:iam::418295695583:role/rosa-hcp-kube-system-kube-controller-manager
 - arn:aws:iam::418295695583:role/rosa-hcp-kube-system-capa-controller-manager
 - arn:aws:iam::418295695583:role/rosa-hcp-kube-system-control-plane-operator
 - arn:aws:iam::418295695583:role/rosa-hcp-kube-system-kms-provider
 - arn:aws:iam::418295695583:role/rosa-hcp-openshift-image-registry-installer-cloud-credentials
 - arn:aws:iam::418295695583:role/rosa-hcp-openshift-ingress-operator-cloud-credentials
 - arn:aws:iam::418295695583:role/rosa-hcp-openshift-cluster-csi-drivers-ebs-cloud-credentials
 - arn:aws:iam::418295695583:role/rosa-hcp-openshift-cloud-network-config-controller-cloud-credent
Managed Policies:            Yes
State:                       waiting (Waiting for user action)
Private:                     No
Delete Protection:           Disabled
Created:                     Dec 22 2025 17:02:24 UTC
[DEPRECATED] User Workload Monitoring: Enabled
Details Page:                https://console.redhat.com/openshift/details/s/37D1h7yKamEyLF1qpL5gUu9wzzk
OIDC Endpoint URL:           https://oidc.op1.openshiftapps.com/2du11g36ejmoo4624pofphlrgf4r9tf3 (Managed)
Etcd Encryption:             Disabled
Audit Log Forwarding:        Disabled
External Authentication:     Disabled
Zero Egress:                 Disabled

INFO: Preparing to create operator roles.
INFO: Operator Roles already exists
INFO: Preparing to create OIDC Provider.
INFO: OIDC provider already exists
INFO: To determine when your cluster is Ready, run 'rosa describe cluster -c kx-8408208f1d'.
INFO: To watch your cluster installation logs, run 'rosa logs install -c kx-8408208f1d --watch'.
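The exact `rosa create cluster` invocation is not captured in this log, only its output. The following is a minimal sketch of commands that would produce output like the above, assuming an STS-based hosted-control-plane setup driven non-interactively; the token variable, the `--mode auto` choice, and the trimmed subnet list are assumptions, while the cluster name, region, and subnet IDs are taken from the describe output shown above.

  # Illustrative only -- not the original automation script.
  rosa login --token "$ROSA_TOKEN"    # Red Hat account token assumed to be in the environment
  rosa create cluster \
    --cluster-name kx-8408208f1d \
    --sts --hosted-cp --mode auto \
    --region us-east-1 \
    --subnet-ids subnet-001fc23497e4a3aeb,subnet-00ffba09365a434bc   # remaining subnets omitted here

  # Follow-up commands suggested by the CLI output itself:
  rosa list clusters
  rosa describe cluster -c kx-8408208f1d
  rosa logs install -c kx-8408208f1d --watch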
INFO: Track the progress of the cluster creation...
WARN: The current version (1.2.56) is not up to date with latest rosa cli released version (1.2.59).
WARN: It is recommended that you update to the latest version.
W: Region flag will be removed from this command in future versions
INFO: Cluster 'kx-8408208f1d' is in waiting state waiting for installation to begin. Logs will show up within 5 minutes
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-8408208f1d Version
2025-12-22 17:08:19 +0000 UTC hostedclusters kx-8408208f1d ValidAWSIdentityProvider StatusUnknown
2025-12-22 17:08:20 +0000 UTC certificates cluster-api-cert Issuing certificate as Secret does not exist
2025-12-22 17:08:20 +0000 UTC certificates cluster-api-cert Issuing certificate as Secret does not exist
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d The hosted control plane is not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Condition not found in the CVO.
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d ValidConfiguration condition is false: NamedCertificates get secret: Invalid value: "cluster-api-cert": Secret "cluster-api-cert" not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d The hosted control plane is not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d The hosted control plane is not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Condition not found in the CVO.
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Condition not found in the CVO.
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Condition not found in the CVO.
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Condition not found in the CVO.
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d The hosted control plane is not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Ignition server deployment not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d NamedCertificates get secret: Invalid value: "cluster-api-cert": Secret "cluster-api-cert" not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d HostedCluster is supported by operator configuration
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Release image is valid
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d The hosted control plane is not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Reconciliation active on resource
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d ValidConfiguration condition is false: NamedCertificates get secret: Invalid value: "cluster-api-cert": Secret "cluster-api-cert" not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d The hosted control plane is not found
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d The hosted control plane is not found
2025-12-22 17:08:23 +0000 UTC hostedclusters kx-8408208f1d configuration is invalid: NamedCertificates get secret: Invalid value: "cluster-api-cert": Secret "cluster-api-cert" not found
2025-12-22 17:09:45 +0000 UTC certificates cluster-api-cert Certificate is up to date and has not expired
2025-12-22 17:09:51 +0000 UTC hostedclusters kx-8408208f1d Configuration passes validation
2025-12-22 17:09:53 +0000 UTC hostedclusters kx-8408208f1d Required platform credentials are found
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d AWS KMS is not configured
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d capi-provider deployment has 1 unavailable replicas
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d lookup api.kx-8408208f1d.ziwy.p3.openshiftapps.com on 172.30.0.10:53: no such host
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d Configuration passes validation
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d Waiting for etcd to reach quorum
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d Kube APIServer deployment not found
2025-12-22 17:10:02 +0000 UTC hostedclusters kx-8408208f1d OIDC configuration is valid
2025-12-22 17:10:02 +0000 UTC hostedclusters kx-8408208f1d Reconciliation completed successfully
2025-12-22 17:10:17 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:10:17 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:10:25 +0000 UTC hostedclusters kx-8408208f1d WebIdentityErr
2025-12-22 17:10:40 +0000 UTC hostedclusters kx-8408208f1d EtcdAvailable QuorumAvailable
2025-12-22 17:11:04 +0000 UTC hostedclusters kx-8408208f1d Kube APIServer deployment is available
2025-12-22 17:11:25 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:11:32 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:11:33 +0000 UTC hostedclusters kx-8408208f1d Ignition server deployment is available
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d ClusterVersionSucceeding FromClusterVersion
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Condition not found in the CVO.
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Payload loaded version="4.17.45" image="quay.io/openshift-release-dev/ocp-release@sha256:bcadd0f1bcc3f12859c9159d7341dd359a0e5854adfad7712b3ea1b0829bd585" architecture="Multi"
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d ClusterVersionAvailable FromClusterVersion
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Working towards 4.17.45: 281 of 623 done (45% complete)
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-8408208f1d Version
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d HostedCluster is at expected version
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Reconciliation active on resource
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Get "https://aff5b22997df74366874a46a5da6087d-d788cfa9dd65e6d4.elb.us-east-1.amazonaws.com:443/healthz": dial tcp: lookup aff5b22997df74366874a46a5da6087d-d788cfa9dd65e6d4.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Release image is valid
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d HostedCluster is supported by operator configuration
2025-12-22 17:09:45 +0000 UTC certificates cluster-api-cert Certificate is up to date and has not expired
2025-12-22 17:09:51 +0000 UTC hostedclusters kx-8408208f1d Configuration passes validation
2025-12-22 17:09:53 +0000 UTC hostedclusters kx-8408208f1d Required platform credentials are found
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d Configuration passes validation
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d router deployment has 1 unavailable replicas
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d AWS KMS is not configured
2025-12-22 17:10:02 +0000 UTC hostedclusters kx-8408208f1d OIDC configuration is valid
2025-12-22 17:10:17 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:10:17 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:10:40 +0000 UTC hostedclusters kx-8408208f1d EtcdAvailable QuorumAvailable
2025-12-22 17:11:04 +0000 UTC hostedclusters kx-8408208f1d Kube APIServer deployment is available
2025-12-22 17:11:25 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:11:32 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:11:33 +0000 UTC hostedclusters kx-8408208f1d Ignition server deployment is available
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Condition not found in the CVO.
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Payload loaded version="4.17.45" image="quay.io/openshift-release-dev/ocp-release@sha256:bcadd0f1bcc3f12859c9159d7341dd359a0e5854adfad7712b3ea1b0829bd585" architecture="Multi"
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d ClusterVersionAvailable FromClusterVersion
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Unable to apply 4.17.45: an unknown error has occurred: MultipleErrors
2025-12-22 17:12:14 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:12:16 +0000 UTC hostedclusters kx-8408208f1d Multiple errors are preventing progress:
  * Cluster operators console, dns, ingress, insights, kube-storage-version-migrator, monitoring, network, node-tuning, openshift-samples, service-ca are not available
  * Could not update imagestream "openshift/driver-toolkit" (440 of 623): resource may have been deleted
  * Could not update operatorgroup "openshift-monitoring/openshift-cluster-monitoring" (552 of 623): resource may have been deleted
  * Could not update role "openshift-authentication/prometheus-k8s" (539 of 623): resource may have been deleted
  * Could not update role "openshift-console-operator/prometheus-k8s" (581 of 623): resource may have been deleted
  * Could not update role "openshift-console/prometheus-k8s" (585 of 623): resource may have been deleted
  * Could not update role "openshift-ingress-operator/prometheus-k8s" (592 of 623): resource may have been deleted
  * Could not update role "openshift-kube-apiserver-operator/prometheus-k8s" (596 of 623): resource may have been deleted
2025-12-22 17:12:31 +0000 UTC hostedclusters kx-8408208f1d failed to reconcile hostedcontrolplane: Operation cannot be fulfilled on hostedcontrolplanes.hypershift.openshift.io "kx-8408208f1d": the object has been modified; please apply your changes to the latest version and try again
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-8408208f1d Version
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d HostedCluster is at expected version
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Reconciliation active on resource
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d Release image is valid
2025-12-22 17:08:22 +0000 UTC hostedclusters kx-8408208f1d HostedCluster is supported by operator configuration
2025-12-22 17:09:45 +0000 UTC certificates cluster-api-cert Certificate is up to date and has not expired
2025-12-22 17:09:51 +0000 UTC hostedclusters kx-8408208f1d Configuration passes validation
2025-12-22 17:09:53 +0000 UTC hostedclusters kx-8408208f1d Required platform credentials are found
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d Configuration passes validation
2025-12-22 17:09:56 +0000 UTC hostedclusters kx-8408208f1d AWS KMS is not configured
2025-12-22 17:10:02 +0000 UTC hostedclusters kx-8408208f1d OIDC configuration is valid
2025-12-22 17:10:17 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:10:17 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:10:40 +0000 UTC hostedclusters kx-8408208f1d EtcdAvailable QuorumAvailable
2025-12-22 17:11:04 +0000 UTC hostedclusters kx-8408208f1d Kube APIServer deployment is available
2025-12-22 17:11:25 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:11:32 +0000 UTC hostedclusters kx-8408208f1d All is well
2025-12-22 17:11:33 +0000 UTC hostedclusters kx-8408208f1d Ignition server deployment is available
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Payload loaded version="4.17.45" image="quay.io/openshift-release-dev/ocp-release@sha256:bcadd0f1bcc3f12859c9159d7341dd359a0e5854adfad7712b3ea1b0829bd585" architecture="Multi"
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d ClusterVersionAvailable FromClusterVersion
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Unable to apply 4.17.45: some cluster operators are not available
2025-12-22 17:11:39 +0000 UTC hostedclusters kx-8408208f1d Condition not found in the CVO.
2025-12-22 17:12:16 +0000 UTC hostedclusters kx-8408208f1d Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available
2025-12-22 17:12:32 +0000 UTC hostedclusters kx-8408208f1d The hosted cluster is not degraded
2025-12-22 17:12:37 +0000 UTC hostedclusters kx-8408208f1d Reconciliation completed successfully
2025-12-22 17:12:55 +0000 UTC hostedclusters kx-8408208f1d The hosted control plane is available
INFO: Cluster 'kx-8408208f1d' is now ready
INFO: ROSA with HCP cluster is ready, create a cluster admin account for accessing the cluster
WARN: The current version (1.2.56) is not up to date with latest rosa cli released version (1.2.59).
WARN: It is recommended that you update to the latest version.
INFO: Storing login command...
INFO: Check if it's able to login to OCP cluster...
Retried 1 times...
INFO: Check if apiserver is ready...
Waiting for cluster operators to be accessible for 2m...

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console
csi-snapshot-controller                     4.17.45   True        False         False      4m21s
dns                                         4.17.45   False       False         True       4m21s   DNS "default" is unavailable.
image-registry                                        False       True          True       4m7s    Available: The deployment does not have available replicas...
ingress                                               False       True          True       4m7s    The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver                              4.17.45   True        False         False      4m12s
kube-controller-manager                     4.17.45   True        False         False      4m12s
kube-scheduler                              4.17.45   True        False         False      4m12s
kube-storage-version-migrator
monitoring
network                                     4.17.45   True        True          False      4m      Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 2 nodes)
node-tuning                                           False       True          False      4m      DaemonSet "tuned" has no available Pod(s)
openshift-apiserver                         4.17.45   True        False         False      4m12s
openshift-controller-manager                4.17.45   True        False         False      4m12s
openshift-samples
operator-lifecycle-manager                  4.17.45   True        False         False      4m13s
operator-lifecycle-manager-catalog          4.17.45   True        False         False      4m12s
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      4m12s
service-ca
storage                                     4.17.45   False       False         False      4m12s   AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service

cluster operators to be accessible finished!
[INFO] Cluster operators are accessible.
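The log only shows the results of storing a login command and retrying cluster access; the underlying commands are not captured. A minimal sketch of that step, assuming `rosa create admin` is used to generate the cluster-admin credentials and that the API URL, password variable, and retry count are placeholders rather than the original script's values:

  # Illustrative only -- the original automation is not shown in the log.
  rosa create admin --cluster kx-8408208f1d    # prints an 'oc login ... --username cluster-admin --password <generated>' command
  for i in $(seq 1 10); do
      oc login "$API_URL" --username cluster-admin --password "$ADMIN_PASSWORD" && break
      echo "Retried $i times..."
      sleep 30
  done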
Waiting for cluster to be reported as healthy for 60m...

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console
csi-snapshot-controller                     4.17.45   True        False         False      4m21s
dns                                         4.17.45   False       False         True       4m21s   DNS "default" is unavailable.
image-registry                                        False       True          True       4m7s    Available: The deployment does not have available replicas...
ingress                                               False       True          True       4m7s    The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver                              4.17.45   True        False         False      4m12s
kube-controller-manager                     4.17.45   True        False         False      4m12s
kube-scheduler                              4.17.45   True        False         False      4m12s
kube-storage-version-migrator
monitoring
network                                     4.17.45   True        True          False      4m      Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 2 nodes)
node-tuning                                           False       True          False      4m      DaemonSet "tuned" has no available Pod(s)
openshift-apiserver                         4.17.45   True        False         False      4m12s
openshift-controller-manager                4.17.45   True        False         False      4m12s
openshift-samples
operator-lifecycle-manager                  4.17.45   True        False         False      4m13s
operator-lifecycle-manager-catalog          4.17.45   True        False         False      4m12s
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      4m12s
service-ca
storage                                     4.17.45   False       False         False      4m12s   AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service

Waiting for cluster to be reported as healthy... Trying again in 60s

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console
csi-snapshot-controller                     4.17.45   True        False         False      5m22s
dns                                         4.17.45   False       False         True       5m22s   DNS "default" is unavailable.
image-registry                                        False       True          True       5m8s    Available: The deployment does not have available replicas...
ingress                                               False       True          True       5m8s    The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver                              4.17.45   True        False         False      5m13s
kube-controller-manager                     4.17.45   True        False         False      5m13s
kube-scheduler                              4.17.45   True        False         False      5m13s
kube-storage-version-migrator
monitoring
network                                     4.17.45   True        True          False      5m1s    Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 2 nodes)
node-tuning                                           False       True          False      5m1s    DaemonSet "tuned" has no available Pod(s)
openshift-apiserver                         4.17.45   True        False         False      5m13s
openshift-controller-manager                4.17.45   True        False         False      5m13s
openshift-samples
operator-lifecycle-manager                  4.17.45   True        False         False      5m14s
operator-lifecycle-manager-catalog          4.17.45   True        False         False      5m13s
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      5m13s
service-ca
storage                                     4.17.45   False       False         False      5m13s   AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service

Waiting for cluster to be reported as healthy... Trying again in 60s

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console
csi-snapshot-controller                     4.17.45   True        False         False      6m22s
dns                                         4.17.45   False       False         True       6m22s   DNS "default" is unavailable.
image-registry                                        False       True          True       6m8s    Available: The deployment does not have available replicas...
ingress                                               False       True          True       6m8s    The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver                              4.17.45   True        False         False      6m13s
kube-controller-manager                     4.17.45   True        False         False      6m13s
kube-scheduler                              4.17.45   True        False         False      6m13s
kube-storage-version-migrator
monitoring
network                                     4.17.45   True        True          False      6m1s    Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 2 nodes)
node-tuning                                           False       True          False      6m1s    DaemonSet "tuned" has no available Pod(s)
openshift-apiserver                         4.17.45   True        False         False      6m13s
openshift-controller-manager                4.17.45   True        False         False      6m13s
openshift-samples
operator-lifecycle-manager                  4.17.45   True        False         False      6m14s
operator-lifecycle-manager-catalog          4.17.45   True        False         False      6m13s
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      6m13s
service-ca
storage                                     4.17.45   False       False         False      6m13s   AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service

Waiting for cluster to be reported as healthy... Trying again in 60s

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console
csi-snapshot-controller                     4.17.45   True        False         False      7m22s
dns                                         4.17.45   False       True          True       7m22s   DNS "default" is unavailable.
image-registry                                        False       True          True       7m8s    Available: The deployment does not have available replicas...
ingress                                               False       True          True       7m8s    The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver                              4.17.45   True        False         False      7m13s
kube-controller-manager                     4.17.45   True        False         False      7m13s
kube-scheduler                              4.17.45   True        False         False      7m13s
kube-storage-version-migrator
monitoring
network                                     4.17.45   True        True          False      7m1s    DaemonSet "/openshift-multus/multus" is not available (awaiting 2 nodes)...
node-tuning                                           False       True          False      7m1s    DaemonSet "tuned" has no available Pod(s)
openshift-apiserver                         4.17.45   True        False         False      7m13s
openshift-controller-manager                4.17.45   True        False         False      7m13s
openshift-samples
operator-lifecycle-manager                  4.17.45   True        False         False      7m14s
operator-lifecycle-manager-catalog          4.17.45   True        False         False      7m13s
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      7m13s
service-ca
storage                                     4.17.45   False       True          False      7m13s   AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service

Waiting for cluster to be reported as healthy... Trying again in 60s

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                     4.17.45   Unknown     False         False      6s
csi-snapshot-controller                     4.17.45   True        False         False      8m22s
dns                                         4.17.45   False       True          True       8m22s   DNS "default" is unavailable.
image-registry                                        False       True          True       8m8s    Available: The deployment does not have available replicas...
ingress                                               False       True          True       8m8s    The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights                                    4.17.45   True        False         False      20s
kube-apiserver                              4.17.45   True        False         False      8m13s
kube-controller-manager                     4.17.45   True        False         False      8m13s
kube-scheduler                              4.17.45   True        False         False      8m13s
kube-storage-version-migrator               4.17.45   True        False         False      18s
monitoring
network                                     4.17.45   True        True          False      8m1s    DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 3 nodes)...
node-tuning                                 4.17.45   True        False         False      53s
openshift-apiserver                         4.17.45   True        False         False      8m13s
openshift-controller-manager                4.17.45   True        False         False      8m13s
openshift-samples
operator-lifecycle-manager                  4.17.45   True        False         False      8m14s
operator-lifecycle-manager-catalog          4.17.45   True        False         False      8m13s
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      8m13s
service-ca                                            True        True          False      19s     Progressing: ...
storage                                     4.17.45   True        False         False      51s

Waiting for cluster to be reported as healthy... Trying again in 60s

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                     4.17.45   True        False         False      35s
csi-snapshot-controller                     4.17.45   True        False         False      9m23s
dns                                         4.17.45   True        False         False      35s
image-registry                              4.17.45   True        False         False      31s
ingress                                     4.17.45   True        True          True       25s     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=Unknown (CanaryRouteNotAdmitted: Canary route is not admitted by the default ingress controller)
insights                                    4.17.45   True        False         False      81s
kube-apiserver                              4.17.45   True        False         False      9m14s
kube-controller-manager                     4.17.45   True        False         False      9m14s
kube-scheduler                              4.17.45   True        False         False      9m14s
kube-storage-version-migrator               4.17.45   True        False         False      79s
monitoring                                            Unknown     True          Unknown    54s     Rolling out the stack.
network                                     4.17.45   True        False         False      9m2s
node-tuning                                 4.17.45   True        False         False      114s
openshift-apiserver                         4.17.45   True        False         False      9m14s
openshift-controller-manager                4.17.45   True        False         False      9m14s
openshift-samples                           4.17.45   True        False         False      19s
operator-lifecycle-manager                  4.17.45   True        False         False      9m15s
operator-lifecycle-manager-catalog          4.17.45   True        False         False      9m14s
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      9m14s
service-ca                                  4.17.45   True        False         False      80s
storage                                     4.17.45   True        False         False      112s

Waiting for cluster to be reported as healthy... Trying again in 60s

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                     4.17.45   True        False         False      95s
csi-snapshot-controller                     4.17.45   True        False         False      10m
dns                                         4.17.45   True        False         False      95s
image-registry                              4.17.45   True        False         False      91s
ingress                                     4.17.45   True        False         False      85s
insights                                    4.17.45   True        False         False      2m21s
kube-apiserver                              4.17.45   True        False         False      10m
kube-controller-manager                     4.17.45   True        False         False      10m
kube-scheduler                              4.17.45   True        False         False      10m
kube-storage-version-migrator               4.17.45   True        False         False      2m19s
monitoring                                            Unknown     True          Unknown    114s    Rolling out the stack.
network                                     4.17.45   True        False         False      10m
node-tuning                                 4.17.45   True        False         False      2m54s
openshift-apiserver                         4.17.45   True        False         False      10m
openshift-controller-manager                4.17.45   True        False         False      10m
openshift-samples                           4.17.45   True        False         False      79s
operator-lifecycle-manager                  4.17.45   True        False         False      10m
operator-lifecycle-manager-catalog          4.17.45   True        False         False      10m
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      10m
service-ca                                  4.17.45   True        False         False      2m20s
storage                                     4.17.45   True        False         False      2m52s

Waiting for cluster to be reported as healthy... Trying again in 60s

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                     4.17.45   True        False         False      2m35s
csi-snapshot-controller                     4.17.45   True        False         False      11m
dns                                         4.17.45   True        False         False      2m35s
image-registry                              4.17.45   True        False         False      2m31s
ingress                                     4.17.45   True        False         False      2m25s
insights                                    4.17.45   True        False         False      3m21s
kube-apiserver                              4.17.45   True        False         False      11m
kube-controller-manager                     4.17.45   True        False         False      11m
kube-scheduler                              4.17.45   True        False         False      11m
kube-storage-version-migrator               4.17.45   True        False         False      3m19s
monitoring                                  4.17.45   True        False         False      39s
network                                     4.17.45   True        False         False      11m
node-tuning                                 4.17.45   True        False         False      3m54s
openshift-apiserver                         4.17.45   True        False         False      11m
openshift-controller-manager                4.17.45   True        False         False      11m
openshift-samples                           4.17.45   True        False         False      2m19s
operator-lifecycle-manager                  4.17.45   True        False         False      11m
operator-lifecycle-manager-catalog          4.17.45   True        False         False      11m
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      11m
service-ca                                  4.17.45   True        False         False      3m20s
storage                                     4.17.45   True        False         False      3m52s

Waiting for cluster to be reported as healthy... Trying again in 60s

NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                     4.17.45   True        False         False      3m35s
csi-snapshot-controller                     4.17.45   True        False         False      12m
dns                                         4.17.45   True        False         False      3m35s
image-registry                              4.17.45   True        False         False      3m31s
ingress                                     4.17.45   True        False         False      3m25s
insights                                    4.17.45   True        False         False      4m21s
kube-apiserver                              4.17.45   True        False         False      12m
kube-controller-manager                     4.17.45   True        False         False      12m
kube-scheduler                              4.17.45   True        False         False      12m
kube-storage-version-migrator               4.17.45   True        False         False      4m19s
monitoring                                  4.17.45   True        False         False      99s
network                                     4.17.45   True        False         False      12m
node-tuning                                 4.17.45   True        False         False      4m54s
openshift-apiserver                         4.17.45   True        False         False      12m
openshift-controller-manager                4.17.45   True        False         False      12m
openshift-samples                           4.17.45   True        False         False      3m19s
operator-lifecycle-manager                  4.17.45   True        False         False      12m
operator-lifecycle-manager-catalog          4.17.45   True        False         False      12m
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      12m
service-ca                                  4.17.45   True        False         False      4m20s
storage                                     4.17.45   True        False         False      4m52s

Waiting for cluster to be reported as healthy... Trying again in 60s
NAME                                        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                     4.17.45   True        False         False      4m35s
csi-snapshot-controller                     4.17.45   True        False         False      13m
dns                                         4.17.45   True        False         False      4m35s
image-registry                              4.17.45   True        False         False      4m31s
ingress                                     4.17.45   True        False         False      4m25s
insights                                    4.17.45   True        False         False      5m21s
kube-apiserver                              4.17.45   True        False         False      13m
kube-controller-manager                     4.17.45   True        False         False      13m
kube-scheduler                              4.17.45   True        False         False      13m
kube-storage-version-migrator               4.17.45   True        False         False      5m19s
monitoring                                  4.17.45   True        False         False      2m39s
network                                     4.17.45   True        False         False      13m
node-tuning                                 4.17.45   True        False         False      5m54s
openshift-apiserver                         4.17.45   True        False         False      13m
openshift-controller-manager                4.17.45   True        False         False      13m
openshift-samples                           4.17.45   True        False         False      4m19s
operator-lifecycle-manager                  4.17.45   True        False         False      13m
operator-lifecycle-manager-catalog          4.17.45   True        False         False      13m
operator-lifecycle-manager-packageserver    4.17.45   True        False         False      13m
service-ca                                  4.17.45   True        False         False      5m20s
storage                                     4.17.45   True        False         False      5m52s

Waiting for cluster to be reported as healthy... Trying again in 60s
healthy
cluster to be reported as healthy finished!
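The repeated operator tables above come from a polling loop around `oc get clusteroperators`; the original wait script is not included in the log, so the following is only an assumed approximation of its logic. It treats the cluster as healthy once every ClusterOperator reports Available=True and Degraded=False, and re-prints the table between attempts, mirroring the 60s retries seen above; the echoed messages are copied from the log for resemblance.

  # Assumed reconstruction of the health wait -- not the original script.
  until oc wait clusteroperator --all --for=condition=Available --timeout=60s >/dev/null 2>&1 &&
        oc wait clusteroperator --all --for=condition=Degraded=False --timeout=60s >/dev/null 2>&1; do
      oc get clusteroperators
      echo "Waiting for cluster to be reported as healthy... Trying again in 60s"
  done
  echo "cluster to be reported as healthy finished!"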