Validate and Deploy Rancher-Managed Clusters with Monokle

May 11, 2023
9 min
Sonali Srivastava
Technology Evangelist

Learn how to maximize SUSE's Rancher and Monokle together to validate and deploy Kubernetes clusters successfully.



Most organizations today host applications on Kubernetes, and they often split their dev, test, and prod environments into separate clusters, hosted either on-prem or on a managed Kubernetes service. This separation helps prevent unauthorized access, isolates the control and data planes, and allows finer-grained control over resource requirements for each purpose.

Additionally, organizations split their services across managed service providers for resilience, high availability, and better disaster recovery. Managing so many clusters individually becomes difficult, hence the need for Kubernetes management platforms that let you administer all of them from a single window.

One popular management platform is Rancher, which helps teams handle operational and security complexities in a straightforward way. In short, Rancher lets you import or create clusters, then monitor and manage each of them securely from a single platform.

## Benefits of Using Rancher and Monokle for Kubernetes Deployments

### A real-world perspective for quality deployments

Application deployment can be tricky when dealing with multiple clusters: maintaining consistency and detecting vulnerabilities is difficult. A tool like Monokle makes connecting to Rancher-managed clusters simpler. With Monokle’s shift-left mindset, where warnings and errors are shown in real time close to the developer, you can make changes to manifests easily, deploy to specific clusters, compare development and production cluster resources, and improve deployments accordingly. This helps avoid the classic case of an **application that works in the dev environment but fails in production**.

In this blog post, let’s consider a company with a multi-cluster architecture. Their application runs on an on-premises cluster for development and/or testing and on a managed Kubernetes cluster for production, and like many, they use Rancher to manage their multi-cluster setup.

## How Rancher Benefits Kubernetes

Enterprises prefer to manage multi-cluster architectures with container management tools like Rancher, a Kubernetes management tool that works with clusters regardless of who provides them.

Some benefits of using Rancher include:

- **Flexible provisioning**: Rancher offers flexible provisioning options for Kubernetes, including provisioning from a hosted provider, installation on compute nodes, or importing existing clusters from anywhere.

- **Centralized authentication and access control**: Rancher centralizes authentication and role-based access control (RBAC) for all clusters, allowing global admins to manage cluster access from a single location.

- **Detailed monitoring and alerting**: Rancher provides detailed monitoring and alerting for clusters and their resources, and can also ship logs to external providers.

Rancher & its components

Kubeshop has developed Monokle, a suite of tools created to manage all pre-deployment tasks and policies before errors make it to your cluster. This toolkit consists of Monokle Desktop, the Monokle CLI, and the Monokle Policy Management IDE.

For multiple or distributed teams, suppose team leads need to ensure that standard policies are applied. They can access their YAML manifests via a GitHub repository in Monokle Cloud, creating guardrails that ensure teams meet policy and deployment standards.

The Monokle Policy IDE clones the GitHub project to load the resources, shows resource relationships in a graph, allows previewing Helm/Kustomize output, and supports writing custom policies in TypeScript. These policies are readily available in Monokle Cloud and can also be shared for validating manifests in real time before they are pushed to the pipeline.

Monokle Desktop focuses on effectively managing YAML manifests and all pre-deployment tasks to help keep vulnerabilities from making their way into your cluster. Its error-proofing features include:

- **Enabling validation**: Monokle validates manifests against OPA policies, the Kubernetes schema, and resource links. Validating before deployment keeps vulnerabilities from reaching production. Monokle also lets you create and enable custom policy validators to enforce organization-specific rules.

- **Comparing resources across clusters**: `kubectl` lets anyone deploy, but it offers no built-in way to compare resources across clusters. With Monokle, you can compare local resources against a cluster, and resources between multiple clusters.

Let’s look at how Monokle can minimize the differences between the dev and production clusters running the same application, make changes to the manifests, and deploy resources. In this blog post, we will be working with Monokle Desktop.

## Deploy an Application to Multiple Rancher-Managed Clusters

We have an on-prem dev minikube cluster and a prod AKS cluster, both managed by Rancher, with an application running on each. We will use Rancher to view the dev and prod cluster status, then compare the dev and prod manifests in Monokle to understand how they differ.

### Prerequisites

- Rancher

- Monokle

- Two clusters: one on-premises and one managed Kubernetes cluster
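For reference, two clusters similar to those used in this post can be created from the command line. This is a sketch assuming minikube and the Azure CLI are installed; the profile name, resource group, and node count are illustrative.

```shell
# Dev cluster: a local minikube instance (assumes minikube is installed)
minikube start --profile dev-minikube

# Prod cluster: a managed AKS cluster (assumes the Azure CLI is installed
# and you are logged in; the resource group name is an example)
az group create --name monokle-demo-rg --location eastus
az aks create --resource-group monokle-demo-rg --name prod-azure --node-count 2
az aks get-credentials --resource-group monokle-demo-rg --name prod-azure
```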

### Configure Rancher

Rancher provides a dashboard for cluster management, cloud credentials storage, global settings, continuous delivery, and much more. It can be accessed via the browser on localhost after the Rancher container is up and running. Rancher creates an admin user with a default password. You can also reset this password, and add users, roles, and groups to ensure authorized access. 
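For an evaluation setup, that Rancher container can be started with the single-node Docker install; the container name here is an example, and the generated bootstrap password is read back from the container logs.

```shell
# Start Rancher as a single container (evaluation setup, not production)
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  --name rancher \
  rancher/rancher:latest

# Retrieve the generated admin bootstrap password from the logs
docker logs rancher 2>&1 | grep "Bootstrap Password:"
```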

In the Rancher menu, select the **Cluster Management** option. It lists clusters, cloud credentials, drivers, and more advanced options. Select Clusters and click on **Import Existing**. If your cluster runs on AKS, EKS, or GKE, Rancher pre-defines the requirements, making it easy to import. If your cluster is not from one of these providers, select Generic.

- Import the cluster in Rancher by adding your cloud credentials for the hosted Kubernetes provider.

AKS cluster import in progress in Rancher

- Verify both dev minikube and prod-azure clusters are running in the Rancher and download their respective cluster kubeconfig.

Rancher cluster management
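As a sanity check outside the Rancher UI, each downloaded kubeconfig can be queried directly with kubectl; the file names below are assumptions.

```shell
# Confirm each downloaded kubeconfig points at a reachable cluster
kubectl --kubeconfig ~/Downloads/dev-minikube.yaml get nodes
kubectl --kubeconfig ~/Downloads/prod-azure.yaml get nodes
```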

### Start project

Launch Monokle Desktop and select “New Project”. Click on “Start from a template”. Select the **Basic service deployment** template and provide the following details:

- Name: Name to identify the service and deployment.

- Namespace: Select the namespace you want to deploy the resource to or choose default.

- Image: Enter the image name. Example: `cerebro31/monokle-helloworld:stable`

- Service Port: 80

- Target Port: 80

- Service Type: LoadBalancer

Monokle creates the resources based on input and loads them on the Desktop as shown below.

Monokle Desktop after resource creation from the template
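For reference, the template inputs above translate to a Deployment and a Service roughly like the following. This is a sketch of the generated resources, not Monokle’s exact output; the resource name is an assumption.

```shell
# Roughly equivalent manifests, written out for reference
cat <<'EOF' > hello.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monokle-helloworld
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monokle-helloworld
  template:
    metadata:
      labels:
        app: monokle-helloworld
    spec:
      containers:
        - name: monokle-helloworld
          image: cerebro31/monokle-helloworld:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: monokle-helloworld
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: monokle-helloworld
  ports:
    - port: 80
      targetPort: 80
EOF
```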

### Initial setup

- Enable validation in Monokle to check for schema errors and violations of OPA policies. Click on “Open settings” in Monokle and enable validation. We have enabled all the rules provided by Monokle here. Click on “Configure” to review the rules and enable/disable them accordingly.

Enable validation in Monokle

- Merge the kubeconfig of both clusters into one kubeconfig and configure it in Monokle Desktop to import the clusters.

Kubeconfig imported and clusters visible in the drop-down menu
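Assuming the kubeconfig files downloaded from Rancher are named dev-minikube.yaml and prod-azure.yaml, kubectl can produce the merged file:

```shell
# Merge both kubeconfigs into one file that Monokle can import
KUBECONFIG=dev-minikube.yaml:prod-azure.yaml \
  kubectl config view --flatten > merged-kubeconfig.yaml

# List the contexts now available in the merged file
kubectl --kubeconfig merged-kubeconfig.yaml config get-contexts
```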

## Deploying to Rancher-Managed Clusters with Monokle Desktop

For the dev minikube cluster, we have updated the infrastructure spec and added a label `env` with the value `dev` using Monokle’s Form Editor, which saves the effort of manually editing the YAML.

Add label using Form Editor

Select the Deployment resource and click on `Deploy` to deploy it to the cluster. Similarly, select the Service resource and click on `Deploy`.

Deploy to cluster

For the prod cluster deployment, update the `env` label to `prod` for both the Service and Deployment manifests using Monokle’s Form Editor and click on `Deploy`. In Monokle, we can also view the deployment status using the cluster dashboard: select the cluster from the dropdown and click on Connect to enable cluster mode.

Monokle cluster dashboard for prod-azure cluster

In the above view, the `Activity` section shows the live status of the deployment in progress.
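The same per-cluster deploy and rollout check can also be expressed with kubectl for comparison; the manifest file, Deployment, and context names here are assumptions.

```shell
# Deploy the manifests to the dev cluster, then to prod
kubectl --context dev-minikube apply -f hello.yaml
kubectl --context prod-azure apply -f hello.yaml

# Watch the rollout progress on the prod cluster
kubectl --context prod-azure rollout status deployment/monokle-helloworld
```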

### Verify with Rancher

After a successful deployment, access each cluster in Rancher to view the running application. Select the node (minikube or prod-azure) and click on Execute Shell. Run curl against localhost to validate that the application is responding.

- Node: minikube

Verify the dev server in Rancher using the shell

- Node: aks

Verify the prod server in Rancher using the shell
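The same verification can be done without the Rancher shell by reaching the Service directly; the context, profile, and Service names are assumptions.

```shell
# Dev: minikube exposes LoadBalancer services via a tunnel URL
minikube service monokle-helloworld --url --profile dev-minikube

# Prod: read the external IP Azure assigned to the Service and curl it
EXTERNAL_IP=$(kubectl --context prod-azure get svc monokle-helloworld \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${EXTERNAL_IP}"
```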

## Benefits of Using Monokle Desktop

With a large infrastructure running multiple clusters and accessible to multiple team members, it is not easy to keep up with every change. We can use `kubectl` to view the status of cluster resources and logs, and with cluster management tools like Rancher we can view and control cluster activity. Suppose we have detected some activity on the dev cluster via logs that does not appear on the prod cluster. Manually digging through changes and following up with shift reports consumes time and reduces efficiency.

Let’s see how Monokle can help us in detecting the reason for the activity.

- View dev cluster activity

Monokle dashboard

Logs show that the application has been scaled up. Two questions arise here:

- Have the same changes been made to prod?

- What else has been changed in the dev cluster?

- View the comparison of the dev vs prod deployment in Monokle

Compare resources in Monokle

Monokle shows the replica count for the dev minikube cluster is set to 2, which is not the case in production. This way we can quickly and efficiently detect such activity in a cluster and, if required, roll back immediately using Rancher. We can also compare local resources pre-deployment against the cluster to better understand what impact a new resource will have on existing ones.
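A quick CLI spot-check of the same drift might compare the configured replica count in each context; the Deployment and context names are assumptions.

```shell
# Compare the configured replica count of the same Deployment in each cluster
kubectl --context dev-minikube get deployment monokle-helloworld \
  -o jsonpath='{.spec.replicas}{"\n"}'
kubectl --context prod-azure get deployment monokle-helloworld \
  -o jsonpath='{.spec.replicas}{"\n"}'

# Or diff the full specs to catch any other differences
diff <(kubectl --context dev-minikube get deployment monokle-helloworld -o yaml) \
     <(kubectl --context prod-azure get deployment monokle-helloworld -o yaml)
```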

We have seen how Monokle helps create and manage manifests, an added benefit that lets us avoid vulnerabilities or detect them at an early stage.

## Conclusion

With a large-scale infrastructure and multiple teams working on it, this blog post has shown how we can manage resources with Monokle and clusters with Rancher. Rancher gives global admins control of all clusters from a single location and helps monitor them post-deployment. With Monokle’s ability to enable validation and detect errors pre-deployment, we can adhere to the shift-left mindset. Monokle helps create manifests, deploy them to clusters, and compare resources against the clusters.

Monokle offers many more such features, like enabling custom policies to enforce organization-specific rules, creating custom resource templates for standardization, and previewing Helm or Kustomize resources, all of which increase developer efficiency, reduce vulnerabilities, and shape the desired state of developers’ Kubernetes resources. With these features, applications face minimal downtime or issues post-deployment.

Please reach out to Monokle Product Leader Sergio if you have feedback about how we can make Monokle work better for you, or drop us a mail for information or assistance. You can also join the conversation with other users on Discord as part of our growing community.
