Over time, development teams have had to leverage several open-source tools and packages to implement features not natively provided by Kubernetes. One of these is the container management platform, Rancher.
This article provides a high-level introduction to Rancher, explains how it works, and looks at some use cases where it is an ideal solution.
Definition
Rancher is a container management platform for teams that deploy containers in production. It leverages open-source projects like Prometheus and Grafana to provide DevOps teams with an intuitive, single management platform that’s straightforward even for non-Kubernetes specialists.
Rancher is also useful for helping us meet IT requirements, as it uses centralized authentication. Additionally, its enterprise support allows us to provision, operate, and monitor our Kubernetes clusters running in production.
Similar container management platforms include OpenShift from Red Hat and the Mirantis Kubernetes Engine, formerly Docker Enterprise.
Overview
The 2021 CNCF survey showed that 96 percent of organizations are using or evaluating Kubernetes. Yet, the complexity of provisioning and operating highly available (HA) Kubernetes clusters has been one of the significant challenges for DevOps teams.
Rancher reduces the complexity of provisioning HA Kubernetes clusters by providing a management interface to provision clusters across multiple cloud providers and bare-metal servers. The Apps and Marketplace feature also lets operators extend Rancher's functionality by deploying Helm charts to install custom applications.
In addition to helping with provisioning, Rancher also simplifies Kubernetes cluster management and workload management.
Let’s consider three notable features Rancher offers that go beyond Kubernetes.
Improved Developer Experience (DX)
Kubernetes clusters are primarily operated using the kubectl command-line interface (CLI) tool. To make operating Kubernetes easier, Rancher provides a web-based interface — the Rancher UI — for developers to interact with multiple Kubernetes clusters connected to a Rancher server. This is particularly helpful when working with newer developers who might not be familiar with kubectl.
Through the Rancher UI, we can authenticate, provision, and import existing clusters, switch clusters, generate Kubernetes configurations, and even utilize the data graphs to visualize metrics from running clusters.
For developers familiar with kubectl, the Rancher UI can also launch a web terminal with kubectl pre-installed, so we can run commands directly.
Authentication and Authorization
User management is a valuable feature for engineering teams, as team members need access to the running clusters to perform various tasks.
Rancher provides a centralized user authentication feature to ensure that administrative access to a Kubernetes cluster is restricted to only qualified engineering team members. Authentication can be done through Rancher’s native authentication method or through external authentication providers, like Google OAuth or Azure AD.
Each authenticated developer is referred to as a user, and the operations a user may perform are determined by assigned permissions. Rancher permissions are the access rights granted to authenticated users, ensuring that developers don’t perform operations outside their scope, such as accidentally modifying a cluster.
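The idea of mapping users to permitted operations can be sketched in a few lines. This is a minimal illustration of role-based access checks, not Rancher's actual implementation; the role names and operations below are hypothetical, loosely modeled on Rancher's built-in roles.

```python
# Hypothetical role-to-permissions mapping; Rancher's real roles and
# permission model are richer than this sketch.
ROLE_PERMISSIONS = {
    "admin": {"create-cluster", "delete-cluster", "view-cluster", "deploy-workload"},
    "cluster-member": {"view-cluster", "deploy-workload"},
    "read-only": {"view-cluster"},
}

def is_allowed(role: str, operation: str) -> bool:
    """Return True if the given role grants the requested operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())

# A read-only user cannot accidentally modify a cluster:
print(is_allowed("read-only", "delete-cluster"))       # False
print(is_allowed("cluster-member", "deploy-workload"))  # True
```

The key point is that the check happens centrally, before any request reaches a cluster, so scope violations are rejected uniformly across all connected clusters.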
Security Compliance
Ensuring the optimal security of Kubernetes clusters running in production is a priority for engineering teams. A way to assert that a cluster is secure is to run frequent scans that check if best practices have been followed while setting up the cluster.
Rancher provides an out-of-the-box feature that scans all deployed clusters and runs assessment tests to ensure that they meet the benchmarks from the Center for Internet Security (CIS). The CIS scan leverages kube-bench and Sonobuoy, and the resulting report can be viewed and downloaded in CSV format.
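Because the report is plain CSV, it is easy to post-process, for example to pull out only the failed checks. The snippet below parses a hypothetical excerpt of such a report; the column names and check IDs are illustrative, and the real CSV layout Rancher produces may differ.

```python
import csv
import io

# Hypothetical excerpt of a downloaded CIS scan report (columns assumed).
report_csv = """id,description,state
1.1.1,Ensure that the API server pod specification file permissions are restrictive,pass
1.2.16,Ensure that the admission control plugin PodSecurityPolicy is set,fail
1.2.21,Ensure that the --profiling argument is set to false,pass
"""

# Collect the IDs of every check that did not pass.
failed = [row["id"] for row in csv.DictReader(io.StringIO(report_csv))
          if row["state"] == "fail"]
print(f"{len(failed)} failed check(s): {failed}")  # 1 failed check(s): ['1.2.16']
```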
Although newer releases of Rancher heavily prioritize Kubernetes, we can also deploy Docker Swarm and Mesos from the Rancher catalog. A catalog contains prebuilt application templates that can be deployed in a few minutes.
How Rancher Works
The bulk of the Rancher software runs within the Rancher Server, which contains smaller components used to operate Kubernetes clusters. The Rancher Server is deployed within a separate cluster. It can provision new clusters, or import existing ones, within cloud service providers like Google Kubernetes Engine (GKE) or Azure Kubernetes Service (AKS).
We recommend installing the Rancher Server on a highly available (HA) Kubernetes cluster for production use. This approach minimizes the risk of the Rancher Server going down or losing data. To experiment with Rancher, we can deploy the Rancher Server to a Docker container, then later migrate it to an HA cluster.
The Rancher Server can be accessed through the web-based Rancher UI, Rancher API, and Rancher CLI.
Rancher has several components, but let’s consider three crucial elements to the Server’s operation.
Authentication Proxy
The authentication proxy is a component at the entry point of the Rancher Server. It’s responsible for authenticating incoming Kubernetes API calls made from the Rancher UI, CLI, or API before they get to the connected clusters.
The authentication proxy component leverages the native user impersonation feature of Kubernetes. It first authenticates the user’s request through the configured authentication provider, then sets the impersonation headers of the request before forwarding it to the Kubernetes API server. We can rightly call this component a proxy because it operates as a middleman between the user and the Kubernetes API server.
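The impersonation step can be sketched as follows. The `Impersonate-User` and `Impersonate-Group` header names are the real Kubernetes impersonation headers; everything else is a simplified illustration of what a proxy like this does, not Rancher's actual code.

```python
# Sketch of the user-impersonation step: after authenticating the caller,
# a proxy adds Kubernetes impersonation headers and forwards the request
# to the API server with its own service-account credentials.

def build_impersonation_headers(username: str, groups: list) -> dict:
    headers = {"Impersonate-User": username}
    # Kubernetes accepts repeated Impersonate-Group headers; joining the
    # groups into one value here is a simplification for this sketch.
    if groups:
        headers["Impersonate-Group"] = ", ".join(groups)
    return headers

print(build_impersonation_headers("jane@example.com", ["dev-team"]))
# {'Impersonate-User': 'jane@example.com', 'Impersonate-Group': 'dev-team'}
```

The API server then evaluates the request as if it came from `jane@example.com`, so the cluster's own RBAC rules still apply.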
Rancher connects to downstream clusters using a service account and generates a kubeconfig file to store the connection credentials in the Rancher data store. Rancher also allows bypassing the authentication proxy when we need to connect to a Kubernetes cluster directly.
Rancher Server Data Storage
Rancher uses etcd, the Kubernetes key-value store, to store the Kubernetes cluster state.
The use of etcd for storage is one of the reasons we should install Rancher on a highly available cluster in production: etcd runs on multiple nodes, providing a backup for failover.
Rancher's K3s distribution provides more flexibility for the Rancher Server datastore. Kubernetes operators can decide to use a SQL database such as MySQL or PostgreSQL, or even the embedded SQLite database.
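Conceptually, any of these backends just needs to persist cluster state as key-value pairs. The sketch below mimics that idea with an in-memory SQLite table; the key path and JSON value are illustrative, and the real K3s schema differs.

```python
import sqlite3

# Illustrative sketch: K3s can back its datastore with a SQL database
# instead of etcd. Cluster state is still key-value shaped, so a simple
# two-column table (here, in-memory SQLite) captures the idea.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
db.execute("INSERT INTO kv VALUES (?, ?)",
           ("/registry/namespaces/default", '{"phase": "Active"}'))
db.commit()

(value,) = db.execute(
    "SELECT value FROM kv WHERE key = ?",
    ("/registry/namespaces/default",)).fetchone()
print(value)  # {"phase": "Active"}
```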
Downstream Cluster
The Kubernetes clusters connected to a Rancher Server and running our applications or services are referred to as downstream Kubernetes clusters. A single Rancher Server can connect to thousands of downstream clusters, making them easier to group, identify, and operate.
The connection between the downstream clusters and the Rancher server is made possible through cluster controllers. Rancher can run Kubernetes clusters anywhere.
A downstream cluster is provisioned on multiple nodes, which are compute resources. Rancher can provision these nodes, or we can provision Kubernetes on existing nodes. Clusters managed by Rancher are categorized into the following types:
- Hosted Kubernetes Providers, where Rancher provisions a Kubernetes cluster on cloud service providers like Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS).
- Registered Kubernetes Clusters, where Rancher only connects to an existing Kubernetes cluster. Rancher sets up agents within the cluster that communicate with the Rancher Server.
- Rancher-launched Kubernetes clusters for custom nodes, where Rancher uses the Rancher Kubernetes Engine (RKE) to install Kubernetes on existing nodes, creating a custom cluster.
- Rancher-launched Kubernetes for nodes in an infrastructure provider, where Rancher provisions the compute nodes in a provider, such as Compute Engine on Google Cloud, then installs Kubernetes on the new nodes.
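The four categories above can be summarized as a simple mapping from how a cluster was created to how Rancher manages it. The provider keys and category labels in this sketch are illustrative, not Rancher's API fields.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    provider: str  # hypothetical key describing how the cluster was created

# Illustrative mapping of creation method to management category.
CATEGORY = {
    "gke": "Hosted Kubernetes Provider",
    "eks": "Hosted Kubernetes Provider",
    "imported": "Registered Kubernetes Cluster",
    "custom": "Rancher-launched (custom nodes, RKE)",
    "node-driver": "Rancher-launched (infrastructure provider)",
}

clusters = [Cluster("prod", "gke"), Cluster("legacy", "imported")]
for c in clusters:
    print(c.name, "->", CATEGORY[c.provider])
```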
Usage Examples
Adopting Kubernetes
A great use case for Rancher is within DevOps teams in the process of adopting Kubernetes. Rancher provides several integrated tools and features, including the user-centered management interface, which abstracts away several complex parts and lowers the Kubernetes learning curve for engineers within the DevOps team.
Using Multiple Clusters
Rancher is also helpful for DevOps teams running multiple Kubernetes clusters. Rancher provides features to unify these clusters, and a single Rancher Server can connect anywhere from one to thousands of them. Operators can switch between clusters using dropdowns in the Rancher UI, or group and rename clusters for easier identification.
Edge Computing
Rancher also provides K3s to support running Kubernetes on the nodes of an edge network, benefitting edge computing. K3s is a lightweight, production-ready Kubernetes distribution that supports ARM and IoT devices with lower memory requirements.
Key Takeaways
Throughout this article, we’ve discussed what Rancher is, how it works, and some use cases where utilizing Rancher would be beneficial.
- Rancher makes it easier to group, identify, and operate multiple clusters. The single Rancher Server can manage thousands of Kubernetes clusters for us.
- There are several use cases where Rancher proves beneficial, including when we’re adopting Kubernetes, using multiple Kubernetes clusters, and working with edge computing.
- It provides a centralized platform to provision and operate Kubernetes clusters and resources on multiple cloud service providers or bare metal services, making it user-friendly for beginners and empowering for DevOps teams.
- Additionally, Rancher’s authentication and authorization features make it easier for us to define access control policies for teammates.