INFO: Log in to your Red Hat account...
INFO: Configure AWS Credentials...
WARN: The current version (1.2.56) is not up to date with latest rosa cli released version (1.2.57).
WARN: It is recommended that you update to the latest version.
INFO: Logged in as 'rhtap-shared' on 'https://api.openshift.com'
INFO: Create ROSA with HCP cluster...
WARN: The current version (1.2.56) is not up to date with latest rosa cli released version (1.2.57).
WARN: It is recommended that you update to the latest version.
INFO: Creating cluster 'kx-078ca6a12c'
INFO: To view a list of clusters and their status, run 'rosa list clusters'
INFO: Cluster 'kx-078ca6a12c' has been created.
INFO: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.

Name:                       kx-078ca6a12c
Domain Prefix:              kx-078ca6a12c
Display Name:               kx-078ca6a12c
ID:                         2m4cuekee9ir8e86m2isjhjes6m3v8pt
External ID:                334e6e75-9098-4207-b95f-7a1b65210524
Control Plane:              ROSA Service Hosted
OpenShift Version:          4.17.41
Channel Group:              stable
DNS:                        Not ready
AWS Account:                381492310364
AWS Billing Account:        381492310364
API URL:
Console URL:
Region:                     us-east-1
Availability:
 - Control Plane:           MultiAZ
 - Data Plane:              MultiAZ
Nodes:
 - Compute (desired):       3
 - Compute (current):       0
Network:
 - Type:                    OVNKubernetes
 - Service CIDR:            172.30.0.0/16
 - Machine CIDR:            10.0.0.0/16
 - Pod CIDR:                10.128.0.0/14
 - Host Prefix:             /23
 - Subnets:                 subnet-0208a6297964e4fe1, subnet-0c161c939f7025e15, subnet-023e5c7b3016ed194, subnet-02dbd8abbf884d77f, subnet-0360c2d20442c5ba5, subnet-0aad9c992e402a91a
EC2 Metadata Http Tokens:   optional
Role (STS) ARN:             arn:aws:iam::381492310364:role/rhads-hcp-HCP-ROSA-Installer-Role
Support Role ARN:           arn:aws:iam::381492310364:role/rhads-hcp-HCP-ROSA-Support-Role
Instance IAM Roles:
 - Worker:                  arn:aws:iam::381492310364:role/rhads-hcp-HCP-ROSA-Worker-Role
Operator IAM Roles:
 - arn:aws:iam::381492310364:role/rhads-hcp-kube-system-kube-controller-manager
 - arn:aws:iam::381492310364:role/rhads-hcp-openshift-cloud-network-config-controller-cloud-creden
 - arn:aws:iam::381492310364:role/rhads-hcp-openshift-image-registry-installer-cloud-credentials
 - arn:aws:iam::381492310364:role/rhads-hcp-openshift-ingress-operator-cloud-credentials
 - arn:aws:iam::381492310364:role/rhads-hcp-openshift-cluster-csi-drivers-ebs-cloud-credentials
 - arn:aws:iam::381492310364:role/rhads-hcp-kube-system-capa-controller-manager
 - arn:aws:iam::381492310364:role/rhads-hcp-kube-system-control-plane-operator
 - arn:aws:iam::381492310364:role/rhads-hcp-kube-system-kms-provider
Managed Policies:           Yes
State:                      waiting (Waiting for user action)
Private:                    No
Delete Protection:          Disabled
Created:                    Oct 24 2025 06:31:32 UTC
[DEPRECATED] User Workload Monitoring: Enabled
Details Page:               https://console.redhat.com/openshift/details/s/34V8gqJTymeQGy9ZjdwnLDj8JpM
OIDC Endpoint URL:          https://oidc.op1.openshiftapps.com/2jtsga3i2etnl697l7bk5i1kmbm4a95j (Managed)
Etcd Encryption:            Disabled
Audit Log Forwarding:       Disabled
External Authentication:    Disabled
Zero Egress:                Disabled

INFO: Preparing to create operator roles.
INFO: Operator Roles already exists
INFO: Preparing to create OIDC Provider.
INFO: OIDC provider already exists
INFO: To determine when your cluster is Ready, run 'rosa describe cluster -c kx-078ca6a12c'.
INFO: To watch your cluster installation logs, run 'rosa logs install -c kx-078ca6a12c --watch'.
INFO: Track the progress of the cluster creation...
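The summary above is the output of the rosa CLI provisioning step. A minimal sketch of the commands that would produce it is shown below; only the cluster name, region, role prefix, and subnets come from the summary, everything else (flags, ordering, token handling) is an assumption about how the automation invokes rosa:

    # Hypothetical reconstruction of the provisioning commands; exact flags are assumptions.
    rosa login --token="<offline token from console.redhat.com>"
    rosa create cluster --cluster-name kx-078ca6a12c \
        --sts --hosted-cp --mode auto \
        --region us-east-1 \
        --operator-roles-prefix rhads-hcp \
        --subnet-ids <the six subnet IDs listed in the summary> \
        --replicas 3
    rosa describe cluster -c kx-078ca6a12c       # prints the summary block above
    rosa logs install -c kx-078ca6a12c --watch   # produces the installation watch log that follows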
WARN: The current version (1.2.56) is not up to date with latest rosa cli released version (1.2.57).
WARN: It is recommended that you update to the latest version.
W: Region flag will be removed from this command in future versions
INFO: Cluster 'kx-078ca6a12c' is in waiting state waiting for installation to begin. Logs will show up within 5 minutes
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-078ca6a12c Version
2025-10-24 06:36:35 +0000 UTC certificates cluster-api-cert Issuing certificate as Secret does not exist
2025-10-24 06:36:35 +0000 UTC certificates cluster-api-cert Issuing certificate as Secret does not exist
2025-10-24 06:36:35 +0000 UTC hostedclusters kx-078ca6a12c ValidAWSIdentityProvider StatusUnknown
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c The hosted control plane is not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c The hosted control plane is not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c The hosted control plane is not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c The hosted control plane is not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c The hosted control plane is not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c The hosted control plane is not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c The hosted control plane is not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Ignition server deployment not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c NamedCertificates get secret: Invalid value: "cluster-api-cert": Secret "cluster-api-cert" not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c HostedCluster is supported by operator configuration
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Release image is valid
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c ValidConfiguration condition is false: NamedCertificates get secret: Invalid value: "cluster-api-cert": Secret "cluster-api-cert" not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Reconciliation active on resource
2025-10-24 06:36:38 +0000 UTC hostedclusters kx-078ca6a12c configuration is invalid: NamedCertificates get secret: Invalid value: "cluster-api-cert": Secret "cluster-api-cert" not found
2025-10-24 06:36:38 +0000 UTC hostedclusters kx-078ca6a12c ValidConfiguration condition is false: NamedCertificates get secret: Invalid value: "cluster-api-cert": Secret "cluster-api-cert" not found
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-078ca6a12c Version
2025-10-24 06:36:35 +0000 UTC hostedclusters kx-078ca6a12c ValidAWSIdentityProvider StatusUnknown
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the HCP
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Reconciliation active on resource
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Ignition server deployment not found
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c router load balancer is not provisioned; 10s since creation.; router load balancer is not provisioned; 10s since creation.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Release image is valid
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c HostedCluster is supported by operator configuration
2025-10-24 06:36:38 +0000 UTC hostedclusters kx-078ca6a12c HostedCluster is at expected version
2025-10-24 06:38:02 +0000 UTC certificates cluster-api-cert Certificate is up to date and has not expired
2025-10-24 06:38:04 +0000 UTC hostedclusters kx-078ca6a12c Configuration passes validation
2025-10-24 06:38:05 +0000 UTC hostedclusters kx-078ca6a12c Required platform credentials are found
2025-10-24 06:38:10 +0000 UTC hostedclusters kx-078ca6a12c OIDC configuration is valid
2025-10-24 06:38:10 +0000 UTC hostedclusters kx-078ca6a12c Reconciliation completed successfully
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c lookup api.kx-078ca6a12c.iarw.p3.openshiftapps.com on 172.30.0.10:53: no such host
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c Configuration passes validation
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c EtcdAvailable StatefulSetNotFound
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c AWS KMS is not configured
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c Kube APIServer deployment not found
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c router load balancer is not provisioned; 10s since creation.; router load balancer is not provisioned; 10s since creation.
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c capi-provider deployment has 2 unavailable replicas
2025-10-24 06:38:36 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:38:36 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:38:42 +0000 UTC hostedclusters kx-078ca6a12c WebIdentityErr
2025-10-24 06:39:05 +0000 UTC hostedclusters kx-078ca6a12c EtcdAvailable QuorumAvailable
2025-10-24 06:39:52 +0000 UTC hostedclusters kx-078ca6a12c Kube APIServer deployment is available
2025-10-24 06:40:15 +0000 UTC hostedclusters kx-078ca6a12c Ignition server deployment is available
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c Payload loaded version="4.17.41" image="quay.io/openshift-release-dev/ocp-release@sha256:57f09f90de7ab876109581cef6b2cf9da8ff62818bd9fb1503c0cc26d5a5d80a" architecture="Multi"
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c ClusterVersionAvailable FromClusterVersion
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c Working towards 4.17.41: 20 of 621 done (3% complete)
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c ClusterVersionSucceeding FromClusterVersion
2025-10-24 06:40:43 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:40:51 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:41:17 +0000 UTC hostedclusters kx-078ca6a12c Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available
2025-10-24 06:41:18 +0000 UTC hostedclusters kx-078ca6a12c Reconciliation completed successfully
2025-10-24 06:41:23 +0000 UTC hostedclusters kx-078ca6a12c The hosted cluster is not degraded
2025-10-24 06:41:32 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:41:32 +0000 UTC hostedclusters kx-078ca6a12c Get "https://ab6c4f7d9135b4bbca861cb493ef865c-691c9635c510f3db.elb.us-east-1.amazonaws.com:443/healthz": dial tcp: lookup ab6c4f7d9135b4bbca861cb493ef865c-691c9635c510f3db.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-078ca6a12c Version
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Release image is valid
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c Reconciliation active on resource
2025-10-24 06:36:37 +0000 UTC hostedclusters kx-078ca6a12c HostedCluster is supported by operator configuration
2025-10-24 06:36:38 +0000 UTC hostedclusters kx-078ca6a12c HostedCluster is at expected version
2025-10-24 06:38:02 +0000 UTC certificates cluster-api-cert Certificate is up to date and has not expired
2025-10-24 06:38:04 +0000 UTC hostedclusters kx-078ca6a12c Configuration passes validation
2025-10-24 06:38:05 +0000 UTC hostedclusters kx-078ca6a12c Required platform credentials are found
2025-10-24 06:38:10 +0000 UTC hostedclusters kx-078ca6a12c OIDC configuration is valid
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c Configuration passes validation
2025-10-24 06:38:13 +0000 UTC hostedclusters kx-078ca6a12c AWS KMS is not configured
2025-10-24 06:38:36 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:38:36 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:39:05 +0000 UTC hostedclusters kx-078ca6a12c EtcdAvailable QuorumAvailable
2025-10-24 06:39:52 +0000 UTC hostedclusters kx-078ca6a12c Kube APIServer deployment is available
2025-10-24 06:40:15 +0000 UTC hostedclusters kx-078ca6a12c Ignition server deployment is available
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c Unable to apply 4.17.41: some cluster operators are not available
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c Payload loaded version="4.17.41" image="quay.io/openshift-release-dev/ocp-release@sha256:57f09f90de7ab876109581cef6b2cf9da8ff62818bd9fb1503c0cc26d5a5d80a" architecture="Multi"
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c Condition not found in the CVO.
2025-10-24 06:40:42 +0000 UTC hostedclusters kx-078ca6a12c ClusterVersionAvailable FromClusterVersion
2025-10-24 06:40:43 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:40:51 +0000 UTC hostedclusters kx-078ca6a12c All is well
2025-10-24 06:41:17 +0000 UTC hostedclusters kx-078ca6a12c Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available
2025-10-24 06:41:18 +0000 UTC hostedclusters kx-078ca6a12c Reconciliation completed successfully
2025-10-24 06:41:23 +0000 UTC hostedclusters kx-078ca6a12c The hosted cluster is not degraded
2025-10-24 06:42:05 +0000 UTC hostedclusters kx-078ca6a12c The hosted control plane is available
INFO: Cluster 'kx-078ca6a12c' is now ready
INFO: ROSA with HCP cluster is ready, create a cluster admin account for accessing the cluster
WARN: The current version (1.2.56) is not up to date with latest rosa cli released version (1.2.57).
WARN: It is recommended that you update to the latest version.
INFO: Storing login command...
INFO: Check if it's able to login to OCP cluster...
Retried 1 times...
INFO: Check if apiserver is ready...
Waiting for cluster operators to be accessible for 2m...
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console
csi-snapshot-controller                    4.17.41   True        False         False      3m36s
dns                                        4.17.41   False       False         True       3m36s   DNS "default" is unavailable.
image-registry                                       False       True          True       2m53s   Available: The deployment does not have available replicas...
ingress                                              False       True          True       3m21s   The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver                             4.17.41   True        False         False      3m28s
kube-controller-manager                    4.17.41   True        False         False      3m28s
kube-scheduler                             4.17.41   True        False         False      3m28s
kube-storage-version-migrator
monitoring
network                                    4.17.41   True        True          False      3m14s   DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready...
node-tuning                                          False       True          False      3m18s   DaemonSet "tuned" has no available Pod(s)
openshift-apiserver                        4.17.41   True        False         False      3m28s
openshift-controller-manager               4.17.41   True        False         False      3m28s
openshift-samples
operator-lifecycle-manager                 4.17.41   True        False         False      3m32s
operator-lifecycle-manager-catalog         4.17.41   True        False         False      3m17s
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      3m28s
service-ca
storage                                    4.17.41   False       False         False      3m28s   AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service
cluster operators to be accessible finished!
[INFO] Cluster operators are accessible.
Waiting for cluster to be reported as healthy for 60m...
Unable to connect to the server: dial tcp: lookup api.kx-078ca6a12c.iarw.p3.openshiftapps.com on 172.30.0.10:53: no such host
Waiting for cluster to be reported as healthy...
Trying again in 60s
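The admin-account creation, login check, and operator tables in this log correspond to standard rosa/oc commands. A minimal sketch, assuming the automation uses the generated cluster-admin credentials (retry handling and credential storage details are not shown in the log and are assumptions):

    rosa create admin --cluster kx-078ca6a12c       # "create a cluster admin account"; prints an 'oc login ...' command to store
    oc login <API URL from 'rosa describe cluster'> --username cluster-admin --password <generated password>
    oc get clusteroperators                         # the NAME/VERSION/AVAILABLE/... tables in this log are this command's output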
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console
csi-snapshot-controller                    4.17.41   True        False         False      4m37s
dns                                        4.17.41   False       False         True       4m37s   DNS "default" is unavailable.
image-registry                                       False       True          True       3m54s   Available: The deployment does not have available replicas...
ingress                                              False       True          True       4m22s   The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver                             4.17.41   True        False         False      4m29s
kube-controller-manager                    4.17.41   True        False         False      4m29s
kube-scheduler                             4.17.41   True        False         False      4m29s
kube-storage-version-migrator
monitoring
network                                    4.17.41   True        True          False      4m15s   DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready...
node-tuning                                          False       True          False      4m19s   DaemonSet "tuned" has no available Pod(s)
openshift-apiserver                        4.17.41   True        False         False      4m29s
openshift-controller-manager               4.17.41   True        False         False      4m29s
openshift-samples
operator-lifecycle-manager                 4.17.41   True        False         False      4m33s
operator-lifecycle-manager-catalog         4.17.41   True        False         False      4m18s
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      4m29s
service-ca
storage                                    4.17.41   False       False         False      4m29s   AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console
csi-snapshot-controller                    4.17.41   True        False         False      5m37s
dns                                        4.17.41   False       False         True       5m37s   DNS "default" is unavailable.
image-registry                                       False       True          True       4m54s   Available: The deployment does not have available replicas...
ingress                                              False       True          True       5m22s   The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver                             4.17.41   True        False         False      5m29s
kube-controller-manager                    4.17.41   True        False         False      5m29s
kube-scheduler                             4.17.41   True        False         False      5m29s
kube-storage-version-migrator
monitoring
network                                    4.17.41   True        True          False      5m15s   DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready...
node-tuning                                4.17.41   True        False         False      33s
openshift-apiserver                        4.17.41   True        False         False      5m29s
openshift-controller-manager               4.17.41   True        False         False      5m29s
openshift-samples
operator-lifecycle-manager                 4.17.41   True        False         False      5m33s
operator-lifecycle-manager-catalog         4.17.41   True        False         False      5m18s
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      5m29s
service-ca
storage                                    4.17.41   True        False         False      20s
Waiting for cluster to be reported as healthy...
Trying again in 60s
Unable to connect to the server: dial tcp: lookup api.kx-078ca6a12c.iarw.p3.openshiftapps.com on 172.30.0.10:53: no such host
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        True          False      70s     SyncLoopRefreshProgressing: working toward version 4.17.41, 1 replicas available
csi-snapshot-controller                    4.17.41   True        False         False      7m38s
dns                                        4.17.41   True        False         False      75s
image-registry                                       True        True          True       69s     Degraded: Registry deployment has timed out progressing: ReplicaSet "image-registry-f9645f9cd" has timed out progressing.
ingress                                    4.17.41   True        True          False      67s     ingresscontroller "default" is progressing: IngressControllerProgressing: One or more status conditions indicate progressing: DeploymentRollingOut=True (DeploymentRollingOut: Waiting for router deployment rollout to finish: 1 of 2 updated replica(s) are available......
insights                                   4.17.41   True        False         False      2m1s
kube-apiserver                             4.17.41   True        False         False      7m30s
kube-controller-manager                    4.17.41   True        False         False      7m30s
kube-scheduler                             4.17.41   True        False         False      7m30s
kube-storage-version-migrator              4.17.41   True        False         False      117s
monitoring                                           Unknown     True          Unknown    94s     Rolling out the stack.
network                                    4.17.41   True        False         False      7m16s
node-tuning                                4.17.41   True        False         False      2m34s
openshift-apiserver                        4.17.41   True        False         False      7m30s
openshift-controller-manager               4.17.41   True        False         False      7m30s
openshift-samples                          4.17.41   True        False         False      62s
operator-lifecycle-manager                 4.17.41   True        False         False      7m34s
operator-lifecycle-manager-catalog         4.17.41   True        False         False      7m19s
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      7m30s
service-ca                                 4.17.41   True        False         False      118s
storage                                    4.17.41   True        False         False      2m21s
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        True          False      2m10s   SyncLoopRefreshProgressing: working toward version 4.17.41, 1 replicas available
csi-snapshot-controller                    4.17.41   True        False         False      8m38s
dns                                        4.17.41   True        False         False      2m15s
image-registry                                       True        True          True       2m9s    Degraded: Registry deployment has timed out progressing: ReplicaSet "image-registry-f9645f9cd" has timed out progressing.
ingress                                    4.17.41   True        True          False      2m7s    ingresscontroller "default" is progressing: IngressControllerProgressing: One or more status conditions indicate progressing: DeploymentRollingOut=True (DeploymentRollingOut: Waiting for router deployment rollout to finish: 1 of 2 updated replica(s) are available......
insights                                   4.17.41   True        False         False      3m1s
kube-apiserver                             4.17.41   True        False         False      8m30s
kube-controller-manager                    4.17.41   True        False         False      8m30s
kube-scheduler                             4.17.41   True        False         False      8m30s
kube-storage-version-migrator              4.17.41   True        False         False      2m57s
monitoring                                           Unknown     True          Unknown    2m34s   Rolling out the stack.
network                                    4.17.41   True        False         False      8m16s
node-tuning                                4.17.41   True        False         False      3m34s
openshift-apiserver                        4.17.41   True        False         False      8m30s
openshift-controller-manager               4.17.41   True        False         False      8m30s
openshift-samples                          4.17.41   True        False         False      2m2s
operator-lifecycle-manager                 4.17.41   True        False         False      8m34s
operator-lifecycle-manager-catalog         4.17.41   True        False         False      8m19s
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      8m30s
service-ca                                 4.17.41   True        False         False      2m58s
storage                                    4.17.41   True        False         False      3m21s
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        True          False      3m10s   SyncLoopRefreshProgressing: working toward version 4.17.41, 1 replicas available
csi-snapshot-controller                    4.17.41   True        False         False      9m38s
dns                                        4.17.41   True        False         False      3m15s
image-registry                                       True        True          True       3m9s    Degraded: Registry deployment has timed out progressing: ReplicaSet "image-registry-f9645f9cd" has timed out progressing.
ingress                                    4.17.41   True        True          False      3m7s    ingresscontroller "default" is progressing: IngressControllerProgressing: One or more status conditions indicate progressing: DeploymentRollingOut=True (DeploymentRollingOut: Waiting for router deployment rollout to finish: 1 of 2 updated replica(s) are available......
insights                                   4.17.41   True        False         False      4m1s
kube-apiserver                             4.17.41   True        False         False      9m30s
kube-controller-manager                    4.17.41   True        False         False      9m30s
kube-scheduler                             4.17.41   True        False         False      9m30s
kube-storage-version-migrator              4.17.41   True        False         False      3m57s
monitoring                                           Unknown     True          Unknown    3m34s   Rolling out the stack.
network                                    4.17.41   True        False         False      9m16s
node-tuning                                4.17.41   True        False         False      4m34s
openshift-apiserver                        4.17.41   True        False         False      9m30s
openshift-controller-manager               4.17.41   True        False         False      9m30s
openshift-samples                          4.17.41   True        False         False      3m2s
operator-lifecycle-manager                 4.17.41   True        False         False      9m34s
operator-lifecycle-manager-catalog         4.17.41   True        False         False      9m19s
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      9m30s
service-ca                                 4.17.41   True        False         False      3m58s
storage                                    4.17.41   True        False         False      4m21s
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        True          False      4m10s   SyncLoopRefreshProgressing: working toward version 4.17.41, 1 replicas available
csi-snapshot-controller                    4.17.41   True        False         False      10m
dns                                        4.17.41   True        False         False      4m15s
image-registry                                       True        True          True       4m9s    Degraded: Registry deployment has timed out progressing: ReplicaSet "image-registry-f9645f9cd" has timed out progressing.
ingress                                    4.17.41   True        True          False      4m7s    ingresscontroller "default" is progressing: IngressControllerProgressing: One or more status conditions indicate progressing: DeploymentRollingOut=True (DeploymentRollingOut: Waiting for router deployment rollout to finish: 1 of 2 updated replica(s) are available......
insights                                   4.17.41   True        False         False      5m1s
kube-apiserver                             4.17.41   True        False         False      10m
kube-controller-manager                    4.17.41   True        False         False      10m
kube-scheduler                             4.17.41   True        False         False      10m
kube-storage-version-migrator              4.17.41   True        False         False      4m57s
monitoring                                           Unknown     True          Unknown    4m34s   Rolling out the stack.
network                                    4.17.41   True        False         False      10m
node-tuning                                4.17.41   True        False         False      5m34s
openshift-apiserver                        4.17.41   True        False         False      10m
openshift-controller-manager               4.17.41   True        False         False      10m
openshift-samples                          4.17.41   True        False         False      4m2s
operator-lifecycle-manager                 4.17.41   True        False         False      10m
operator-lifecycle-manager-catalog         4.17.41   True        False         False      10m
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      10m
service-ca                                 4.17.41   True        False         False      4m58s
storage                                    4.17.41   True        False         False      5m21s
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        True          False      5m11s   SyncLoopRefreshProgressing: working toward version 4.17.41, 1 replicas available
csi-snapshot-controller                    4.17.41   True        False         False      11m
dns                                        4.17.41   True        False         False      5m16s
image-registry                                       True        True          True       5m10s   Degraded: Registry deployment has timed out progressing: ReplicaSet "image-registry-f9645f9cd" has timed out progressing.
ingress                                    4.17.41   True        True          False      5m8s    ingresscontroller "default" is progressing: IngressControllerProgressing: One or more status conditions indicate progressing: DeploymentRollingOut=True (DeploymentRollingOut: Waiting for router deployment rollout to finish: 1 of 2 updated replica(s) are available......
insights                                   4.17.41   True        False         False      6m2s
kube-apiserver                             4.17.41   True        False         False      11m
kube-controller-manager                    4.17.41   True        False         False      11m
kube-scheduler                             4.17.41   True        False         False      11m
kube-storage-version-migrator              4.17.41   True        False         False      5m58s
monitoring                                           Unknown     True          Unknown    5m35s   Rolling out the stack.
network                                    4.17.41   True        False         False      11m
node-tuning                                4.17.41   True        False         False      6m35s
openshift-apiserver                        4.17.41   True        False         False      11m
openshift-controller-manager               4.17.41   True        False         False      11m
openshift-samples                          4.17.41   True        False         False      5m3s
operator-lifecycle-manager                 4.17.41   True        False         False      11m
operator-lifecycle-manager-catalog         4.17.41   True        False         False      11m
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      11m
service-ca                                 4.17.41   True        False         False      5m59s
storage                                    4.17.41   True        False         False      6m22s
Waiting for cluster to be reported as healthy...
Trying again in 60s
Unable to connect to the server: dial tcp: lookup api.kx-078ca6a12c.iarw.p3.openshiftapps.com on 172.30.0.10:53: no such host
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        False         False      7m11s
csi-snapshot-controller                    4.17.41   True        False         False      13m
dns                                        4.17.41   True        True          False      7m16s   DNS "default" reports Progressing=True: "Have 2 available node-resolver pods, want 3."
image-registry                             4.17.41   True        True          False      7m10s   Progressing: The deployment has not completed...
ingress                                    4.17.41   True        False         False      7m8s
insights                                   4.17.41   True        False         False      8m2s
kube-apiserver                             4.17.41   True        False         False      13m
kube-controller-manager                    4.17.41   True        False         False      13m
kube-scheduler                             4.17.41   True        False         False      13m
kube-storage-version-migrator              4.17.41   True        False         False      7m58s
monitoring                                           Unknown     True          Unknown    7m35s   Rolling out the stack.
network                                    4.17.41   True        True          False      13m     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)...
node-tuning                                4.17.41   True        True          False      43s     Waiting for 1/3 Profiles to be applied
openshift-apiserver                        4.17.41   True        False         False      13m
openshift-controller-manager               4.17.41   True        False         False      13m
openshift-samples                          4.17.41   True        False         False      7m3s
operator-lifecycle-manager                 4.17.41   True        False         False      13m
operator-lifecycle-manager-catalog         4.17.41   True        False         False      13m
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      13m
service-ca                                 4.17.41   True        False         False      7m59s
storage                                    4.17.41   True        True          False      8m22s   AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        False         False      8m11s
csi-snapshot-controller                    4.17.41   True        False         False      14m
dns                                        4.17.41   True        True          False      8m16s   DNS "default" reports Progressing=True: "Have 2 available DNS pods, want 3."
image-registry                             4.17.41   True        False         False      8m10s
ingress                                    4.17.41   True        False         False      8m8s
insights                                   4.17.41   True        False         False      9m2s
kube-apiserver                             4.17.41   True        False         False      14m
kube-controller-manager                    4.17.41   True        False         False      14m
kube-scheduler                             4.17.41   True        False         False      14m
kube-storage-version-migrator              4.17.41   True        False         False      8m58s
monitoring                                           Unknown     True          Unknown    8m35s   Rolling out the stack.
network                                    4.17.41   True        True          False      14m     DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
node-tuning                                4.17.41   True        False         False      103s
openshift-apiserver                        4.17.41   True        False         False      14m
openshift-controller-manager               4.17.41   True        False         False      14m
openshift-samples                          4.17.41   True        False         False      8m3s
operator-lifecycle-manager                 4.17.41   True        False         False      14m
operator-lifecycle-manager-catalog         4.17.41   True        False         False      14m
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      14m
service-ca                                 4.17.41   True        False         False      8m59s
storage                                    4.17.41   True        False         False      9m22s
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        False         False      9m12s
csi-snapshot-controller                    4.17.41   True        False         False      15m
dns                                        4.17.41   True        False         False      9m17s
image-registry                             4.17.41   True        False         False      9m11s
ingress                                    4.17.41   True        False         False      9m9s
insights                                   4.17.41   True        False         False      10m
kube-apiserver                             4.17.41   True        False         False      15m
kube-controller-manager                    4.17.41   True        False         False      15m
kube-scheduler                             4.17.41   True        False         False      15m
kube-storage-version-migrator              4.17.41   True        False         False      9m59s
monitoring                                 4.17.41   True        False         False      43s
network                                    4.17.41   True        False         False      15m
node-tuning                                4.17.41   True        False         False      2m44s
openshift-apiserver                        4.17.41   True        False         False      15m
openshift-controller-manager               4.17.41   True        False         False      15m
openshift-samples                          4.17.41   True        False         False      9m4s
operator-lifecycle-manager                 4.17.41   True        False         False      15m
operator-lifecycle-manager-catalog         4.17.41   True        False         False      15m
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      15m
service-ca                                 4.17.41   True        False         False      10m
storage                                    4.17.41   True        False         False      10m
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        False         False      10m
csi-snapshot-controller                    4.17.41   True        False         False      16m
dns                                        4.17.41   True        False         False      10m
image-registry                             4.17.41   True        False         False      10m
ingress                                    4.17.41   True        False         False      10m
insights                                   4.17.41   True        False         False      11m
kube-apiserver                             4.17.41   True        False         False      16m
kube-controller-manager                    4.17.41   True        False         False      16m
kube-scheduler                             4.17.41   True        False         False      16m
kube-storage-version-migrator              4.17.41   True        False         False      10m
monitoring                                 4.17.41   True        False         False      103s
network                                    4.17.41   True        False         False      16m
node-tuning                                4.17.41   True        False         False      3m44s
openshift-apiserver                        4.17.41   True        False         False      16m
openshift-controller-manager               4.17.41   True        False         False      16m
openshift-samples                          4.17.41   True        False         False      10m
operator-lifecycle-manager                 4.17.41   True        False         False      16m
operator-lifecycle-manager-catalog         4.17.41   True        False         False      16m
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      16m
service-ca                                 4.17.41   True        False         False      11m
storage                                    4.17.41   True        False         False      11m
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        False         False      11m
csi-snapshot-controller                    4.17.41   True        False         False      17m
dns                                        4.17.41   True        False         False      11m
image-registry                             4.17.41   True        False         False      11m
ingress                                    4.17.41   True        False         False      11m
insights                                   4.17.41   True        False         False      12m
kube-apiserver                             4.17.41   True        False         False      17m
kube-controller-manager                    4.17.41   True        False         False      17m
kube-scheduler                             4.17.41   True        False         False      17m
kube-storage-version-migrator              4.17.41   True        False         False      11m
monitoring                                 4.17.41   True        False         False      2m43s
network                                    4.17.41   True        False         False      17m
node-tuning                                4.17.41   True        False         False      4m44s
openshift-apiserver                        4.17.41   True        False         False      17m
openshift-controller-manager               4.17.41   True        False         False      17m
openshift-samples                          4.17.41   True        False         False      11m
operator-lifecycle-manager                 4.17.41   True        False         False      17m
operator-lifecycle-manager-catalog         4.17.41   True        False         False      17m
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      17m
service-ca                                 4.17.41   True        False         False      12m
storage                                    4.17.41   True        False         False      12m
Waiting for cluster to be reported as healthy...
Trying again in 60s
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.17.41   True        False         False      12m
csi-snapshot-controller                    4.17.41   True        False         False      18m
dns                                        4.17.41   True        False         False      12m
image-registry                             4.17.41   True        False         False      12m
ingress                                    4.17.41   True        False         False      12m
insights                                   4.17.41   True        False         False      13m
kube-apiserver                             4.17.41   True        False         False      18m
kube-controller-manager                    4.17.41   True        False         False      18m
kube-scheduler                             4.17.41   True        False         False      18m
kube-storage-version-migrator              4.17.41   True        False         False      13m
monitoring                                 4.17.41   True        False         False      3m44s
network                                    4.17.41   True        False         False      18m
node-tuning                                4.17.41   True        False         False      5m45s
openshift-apiserver                        4.17.41   True        False         False      18m
openshift-controller-manager               4.17.41   True        False         False      18m
openshift-samples                          4.17.41   True        False         False      12m
operator-lifecycle-manager                 4.17.41   True        False         False      18m
operator-lifecycle-manager-catalog         4.17.41   True        False         False      18m
operator-lifecycle-manager-packageserver   4.17.41   True        False         False      18m
service-ca                                 4.17.41   True        False         False      13m
storage                                    4.17.41   True        False         False      13m
Waiting for cluster to be reported as healthy...
Trying again in 60s
healthy
cluster to be reported as healthy finished!
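The 60s retries and the 60m budget in the messages above suggest a polling loop around 'oc get clusteroperators'. The actual script is not shown in this log; a minimal sketch of such a health wait, with the loop structure and the oc wait conditions as assumptions, is:

    # Hypothetical health-wait loop: poll every 60s, give up after roughly 60m (both intervals appear in the log).
    for attempt in $(seq 1 60); do
        oc get clusteroperators
        # Treat the cluster as healthy once every operator reports Available=True, Progressing=False, Degraded=False.
        if oc wait clusteroperator --all --for=condition=Available=True --timeout=30s >/dev/null 2>&1 &&
           oc wait clusteroperator --all --for=condition=Progressing=False --timeout=30s >/dev/null 2>&1 &&
           oc wait clusteroperator --all --for=condition=Degraded=False --timeout=30s >/dev/null 2>&1; then
            echo "cluster to be reported as healthy finished!"
            break
        fi
        echo "Waiting for cluster to be reported as healthy..."
        echo "Trying again in 60s"
        sleep 60
    done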