Amazon EKS provides a managed control plane for your Kubernetes cluster. Amazon EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Rancher provides an intuitive user interface for managing and deploying the Kubernetes clusters you run in Amazon EKS. With this guide, you will use Rancher to quickly and easily launch an Amazon EKS Kubernetes cluster in your AWS account. For more information on Amazon EKS, see this documentation.
- Prerequisites in Amazon Web Services
- Architecture
- Create the EKS Cluster
- EKS Cluster Configuration Reference
- Troubleshooting
- AWS Service Events
- Security and Compliance
- Tutorial
- Minimum EKS Permissions
- Syncing
Prerequisites in Amazon Web Services
Note: Deploying to AWS will incur charges. For more information, refer to the EKS pricing page.
To set up a cluster on EKS, you will need to set up an Amazon VPC (Virtual Private Cloud). You will also need to make sure that the account you will be using to create the EKS cluster has the appropriate permissions. For details, refer to the official guide on Amazon EKS Prerequisites.
Amazon VPC
An Amazon VPC is required to launch the EKS cluster. The VPC enables you to launch AWS resources into a virtual network that you’ve defined. You can set one up yourself and provide it during cluster creation in Rancher. If you do not provide one during creation, Rancher will create one. For more information, refer to the Tutorial: Creating a VPC with Public and Private Subnets for Your Amazon EKS Cluster.
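A minimal sketch (not something this page requires): if you want to inspect your existing VPCs and subnets before filling out the cluster form, you can list them with boto3. AWS credentials are assumed to be configured locally, and the region is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# List each VPC with its CIDR block and subnet IDs so you can pick values
# for the Rancher cluster form.
for vpc in ec2.describe_vpcs()["Vpcs"]:
    subnets = ec2.describe_subnets(
        Filters=[{"Name": "vpc-id", "Values": [vpc["VpcId"]]}]
    )["Subnets"]
    print(vpc["VpcId"], vpc["CidrBlock"], [s["SubnetId"] for s in subnets])
```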
IAM Policies
Rancher needs access to your AWS account in order to provision and administer your Kubernetes clusters in Amazon EKS. You’ll need to create a user for Rancher in your AWS account and define what that user can access.
Create a user with programmatic access by following the steps here.
Next, create an IAM policy that defines what this user has access to in your AWS account. It’s important to only grant this user minimal access within your account. The minimum permissions required for an EKS cluster are listed here. Follow the steps here to create an IAM policy and attach it to your user.
Finally, follow the steps here to create an access key and secret key for this user.
Note: It’s important to regularly rotate your access and secret keys. See this documentation for more information.
For more detailed information on IAM policies for EKS, refer to the official documentation on Amazon EKS IAM Policies, Roles, and Permissions.
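As a rough sketch of the steps above using boto3: the user and policy names below are hypothetical, and "minimum-eks-permissions.json" is assumed to contain the JSON from the Minimum EKS Permissions section of this page.

```python
import boto3

iam = boto3.client("iam")

# Create a dedicated user for Rancher with programmatic access only.
iam.create_user(UserName="rancher-eks")

# Attach a policy granting only the minimum EKS permissions.
policy = iam.create_policy(
    PolicyName="rancher-eks-minimum",
    PolicyDocument=open("minimum-eks-permissions.json").read(),
)
iam.attach_user_policy(
    UserName="rancher-eks", PolicyArn=policy["Policy"]["Arn"]
)

# Create the access key and secret key to enter in Rancher. Remember to
# rotate these regularly.
key = iam.create_access_key(UserName="rancher-eks")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])
```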
Architecture
The figure below illustrates the high-level architecture of Rancher 2.x. The figure depicts a Rancher Server installation that manages two Kubernetes clusters: one created by RKE and another created by EKS.
Managing Kubernetes Clusters through Rancher’s Authentication Proxy
Create the EKS Cluster
Use Rancher to set up and configure your Kubernetes cluster.
From the Clusters page, click Add Cluster.
Choose Amazon EKS.
Enter a Cluster Name.
Use Member Roles to configure user authorization for the cluster. Click Add Member to add users that can access the cluster. Use the Role drop-down to set permissions for each user.
Fill out the rest of the form. For help, refer to the configuration reference.
Click Create.
Result:
Your cluster is created and assigned a state of Provisioning. Rancher is standing up your cluster.
You can access your cluster after its state is updated to Active.
Active clusters are assigned two Projects:
- Default, containing the `default` namespace
- System, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
EKS Cluster Configuration Reference
Changes in Rancher v2.5
More EKS options can be configured when you create an EKS cluster in Rancher, including the following:
- Managed node groups
- Desired size, minimum size, maximum size (requires the Cluster Autoscaler to be installed)
- Control plane logging
- Secrets encryption with KMS
The following capabilities have been added for configuring EKS clusters in Rancher:
- GPU support
- Exclusively use managed nodegroups that come with the most up-to-date AMIs
- Add new nodes
- Upgrade nodes
- Add and remove node groups
- Disable and enable private access
- Add restrictions to public access
- Use your cloud credentials to create the EKS cluster instead of passing in your access key and secret key
Due to the way that cluster data is synced with EKS, if the cluster is modified both from another source, such as the EKS console, and in Rancher within the same five-minute window, some changes could be overwritten. For information about how the sync works and how to configure it, refer to this section.
Account Access
Complete each drop-down and field using the information obtained for your IAM policy.
Setting | Description |
---|---|
Region | From the drop-down, choose the geographical region in which to build your cluster. |
Cloud Credentials | Select the cloud credentials that you created for your IAM policy. For more information on creating cloud credentials in Rancher, refer to this page. |
Service Role
Choose a service role.
Service Role | Description |
---|---|
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster. |
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you’ve already created within AWS. For more information on creating a custom service role in AWS, see the Amazon documentation. |
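If you want to prepare a custom service role ahead of time, the boto3 sketch below shows one way to do it. It mirrors the trust policy and the two managed policy ARNs listed under Service Role Permissions later on this page; the role name is a placeholder.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing the EKS service to assume this role (see
# "Service Role Permissions" below).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "eks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
role = iam.create_role(
    RoleName="my-eks-service-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the same two managed policies Rancher's standard role uses.
for arn in (
    "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
    "arn:aws:iam::aws:policy/AmazonEKSServicePolicy",
):
    iam.attach_role_policy(RoleName="my-eks-service-role", PolicyArn=arn)
print(role["Role"]["Arn"])
```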
Secrets Encryption
Optional: To encrypt secrets, select or enter a key created in AWS Key Management Service (KMS).
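Under the hood, this setting corresponds to EKS envelope encryption of Kubernetes secrets with a KMS key. A boto3 sketch, with placeholder cluster name, region, and key ARN:

```python
import boto3

# Listing keys requires kms:ListKeys, which is part of the minimum
# permissions later on this page.
kms = boto3.client("kms", region_name="us-west-2")
print([k["KeyArn"] for k in kms.list_keys()["Keys"]])

# Enable envelope encryption of Kubernetes secrets on an existing cluster.
eks = boto3.client("eks", region_name="us-west-2")
eks.associate_encryption_config(
    clusterName="my-cluster",
    encryptionConfig=[{
        "resources": ["secrets"],
        "provider": {"keyArn": "arn:aws:kms:us-west-2:123456789012:key/EXAMPLE"},
    }],
)
```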
API Server Endpoint Access
Configuring Public/Private API access is an advanced use case. For details, refer to the EKS cluster endpoint access control documentation.
Private-only API Endpoints
If you enable private and disable public API endpoint access when creating a cluster, then there is an extra step you must take in order for Rancher to connect to the cluster successfully. In this case, a pop-up will be displayed with a command that you will run on the cluster to register it with Rancher. Once the cluster is provisioned, you can run the displayed command anywhere you can connect to the cluster’s Kubernetes API.
There are two ways to avoid this extra manual step:
- You can create the cluster with both private and public API endpoint access enabled. After the cluster is created and in an active state, you can disable public access and Rancher will continue to communicate with the EKS cluster.
- You can ensure that Rancher shares a subnet with the EKS cluster. Security groups can then be used to enable Rancher to communicate with the cluster’s API endpoint. In this case, the command to register the cluster is not needed, and Rancher will be able to communicate with your cluster. For more information on configuring security groups, refer to the security groups documentation.
Public Access Endpoints
Optionally limit access to the public endpoint via explicit CIDR blocks.
If you limit access to specific CIDR blocks, then it is recommended that you also enable private access to avoid losing network communication to the cluster.
For Rancher to retain access to the cluster, one of the following is required:
- Rancher’s IP must be part of an allowed CIDR block
- Private access must be enabled, and Rancher must share a subnet with the cluster and have network access to the cluster, which can be configured with a security group
For more information about public and private access to the cluster endpoint, refer to the Amazon EKS documentation.
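For reference, the endpoint access settings described above map onto the EKS UpdateClusterConfig API. A boto3 sketch with placeholder values:

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

# Keep private access enabled while restricting the public endpoint to a
# CIDR block that includes Rancher's IP, so Rancher retains network access.
eks.update_cluster_config(
    name="my-cluster",
    resourcesVpcConfig={
        "endpointPrivateAccess": True,
        "endpointPublicAccess": True,
        "publicAccessCidrs": ["203.0.113.0/24"],
    },
)
```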
Subnet
Option | Description |
---|---|
Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. |
Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your Control Plane and nodes to use a VPC and Subnet that you’ve already created in AWS. |
For more information, refer to the AWS documentation for Cluster VPC Considerations. Follow one of the sets of instructions below based on your selection from the previous step.
Security Group
For more information, refer to the Amazon documentation on security group requirements and considerations for EKS clusters.
Logging
Configure control plane logs to send to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters.
Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see Kubernetes Components in the Kubernetes documentation.
For more information on EKS control plane logging, refer to the official documentation.
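For reference, the checkboxes in Rancher correspond to the EKS cluster logging configuration. A boto3 sketch enabling all five control plane log types on a hypothetical cluster:

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.update_cluster_config(
    name="my-cluster",
    logging={
        "clusterLogging": [{
            "types": [
                "api",
                "audit",
                "authenticator",
                "controllerManager",
                "scheduler",
            ],
            "enabled": True,  # logs are shipped to CloudWatch Logs
        }]
    },
)
```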
Managed Node Groups
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
For more information about how node groups work and how they are configured, refer to the EKS documentation.
Bring your own launch template
A launch template ID and version can be provided to configure the EC2 instances in a node group. If a launch template is provided, none of the settings in the table below will be configurable in Rancher, so any of them that you need must be specified in the launch template itself. Also note that if a launch template ID and version is provided, only the template version can be updated later; using a new template ID requires creating a new managed node group. (A sketch of the corresponding EKS API call follows the table.)
Option | Description | Required/Optional |
---|---|---|
Instance Type | Choose the hardware specs for the instance you’re provisioning. | Required |
Image ID | Specify a custom AMI for the nodes. Custom AMIs used with EKS must be configured properly. | Optional |
Node Volume Size | The launch template must specify an EBS volume with the desired size | Required |
SSH Key | A key to be added to the instances to provide SSH access to the nodes | Optional |
User Data | Cloud init script in MIME multi-part format | Optional |
Instance Resource Tags | Tag each EC2 instance in the node group | Optional |
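For reference, supplying your own template corresponds to the launchTemplate field on the EKS CreateNodegroup API. A boto3 sketch with placeholder names, IDs, and ARNs:

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.create_nodegroup(
    clusterName="my-cluster",
    nodegroupName="my-nodegroup",
    subnets=["subnet-0123456789abcdef0"],
    nodeRole="arn:aws:iam::123456789012:role/my-node-instance-role",
    # Instance type, AMI, volume size, SSH key, and user data come from the
    # template; only the template version can be changed later.
    launchTemplate={"id": "lt-0123456789abcdef0", "version": "1"},
)
```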
Rancher-managed launch templates
If you do not specify a launch template, then you will be able to configure the above options in the Rancher UI and all of them can be updated after creation. In order to take advantage of all of these options, Rancher will create and manage a launch template for you. Each cluster in Rancher will have one Rancher-managed launch template and each managed node group that does not have a specified launch template will have one version of the managed launch template. The name of this launch template will have the prefix “rancher-managed-lt-” followed by the display name of the cluster. In addition, the Rancher-managed launch template will be tagged with the key “rancher-managed-template” and value “do-not-modify-or-delete” to help identify it as Rancher-managed. It is important that this launch template and its versions not be modified, deleted, or used with any other clusters or managed node groups. Doing so could result in your node groups being “degraded” and needing to be destroyed and recreated.
Custom AMIs
If you specify a custom AMI, whether in a launch template or in Rancher, then the image must be configured properly and you must provide user data to bootstrap the node. This is considered an advanced use case and understanding the requirements is imperative.
If you specify a launch template that does not contain a custom AMI, then Amazon will use the EKS-optimized AMI for the Kubernetes version and selected region. You can also select a GPU enabled instance for workloads that would benefit from it.
Note: The GPU-enabled instance setting in Rancher is ignored if a custom AMI is provided, either in the drop-down or in a launch template.
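If you are building a launch template around a custom AMI, a minimal boto3 sketch is below. The bootstrap call assumes an image derived from an EKS-optimized AMI, which ships /etc/eks/bootstrap.sh; a fully custom image needs its own equivalent, and EKS may expect the user data in MIME multi-part format (see the table above). All IDs are placeholders.

```python
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Bootstrap script to join the node to the cluster; adjust for your image.
user_data = """#!/bin/bash
set -ex
/etc/eks/bootstrap.sh my-cluster
"""

ec2.create_launch_template(
    LaunchTemplateName="my-custom-ami-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # custom AMI
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
```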
Spot instances
Spot instances are now supported by EKS. If a launch template is specified, Amazon recommends that the template not provide an instance type. Instead, Amazon recommends providing multiple instance types. If the “Request Spot Instances” checkbox is enabled for a node group, then you will have the opportunity to provide multiple instance types.
Note: Any selection you made in the instance type drop-down will be ignored in this situation, and you must specify at least one instance type in the “Spot Instance Types” section. Furthermore, a launch template used with EKS cannot request spot instances; requesting spot instances must be part of the EKS configuration.
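For reference, a spot node group in the EKS API takes a SPOT capacity type and a list of instance types directly in the node group configuration, not in a launch template. A boto3 sketch with placeholder names and ARNs:

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.create_nodegroup(
    clusterName="my-cluster",
    nodegroupName="my-spot-nodegroup",
    subnets=["subnet-0123456789abcdef0"],
    nodeRole="arn:aws:iam::123456789012:role/my-node-instance-role",
    capacityType="SPOT",
    # Multiple instance types improve the chance of obtaining spot capacity.
    instanceTypes=["m5.large", "m5a.large", "m4.large"],
)
```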
Node Group Settings
The following settings are also configurable. All of them except the Node Group Name are editable after the node group is created. (A sketch of updating them through the EKS API follows the table.)
Option | Description |
---|---|
Node Group Name | The name of the node group. |
Desired ASG Size | The desired number of instances. |
Maximum ASG Size | The maximum number of instances. This setting won’t take effect until the Cluster Autoscaler is installed. |
Minimum ASG Size | The minimum number of instances. This setting won’t take effect until the Cluster Autoscaler is installed. |
Labels | Kubernetes labels applied to the nodes in the managed node group. |
Tags | These are tags for the managed node group and do not propagate to any of the associated resources. |
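For reference, a sketch of editing these settings after creation through the EKS UpdateNodegroupConfig API; names and values are placeholders.

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.update_nodegroup_config(
    clusterName="my-cluster",
    nodegroupName="my-nodegroup",
    scalingConfig={"minSize": 1, "maxSize": 5, "desiredSize": 2},
    labels={"addOrUpdateLabels": {"workload": "general"}},
)
```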
The configuration reference below applies to the older EKS provisioning interface, where account access is granted with an access key and secret key rather than cloud credentials (see the changes in Rancher v2.5 above).
Account Access
Complete each drop-down and field using the information obtained for your IAM policy.
Setting | Description |
---|---|
Region | From the drop-down, choose the geographical region in which to build your cluster. |
Access Key | Enter the access key that you created for your IAM policy. |
Secret Key | Enter the secret key that you created for your IAM policy. |
Service Role
Choose a service role.
Service Role | Description |
---|---|
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster. |
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you’ve already created within AWS. For more information on creating a custom service role in AWS, see the Amazon documentation. |
Public IP for Worker Nodes
Your selection for this option determines what options are available for VPC & Subnet.
Option | Description |
---|---|
Yes | When your cluster nodes are provisioned, they’re assigned both a private and a public IP address. |
No: Private IPs only | When your cluster nodes are provisioned, they’re assigned only a private IP address. If you choose this option, you must also choose a VPC & Subnet that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane. |
VPC & Subnet
The available options depend on the public IP for worker nodes.
Option | Description |
---|---|
Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet. |
Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you’ve already created in AWS. If you choose this option, complete the remaining steps below. |
For more information, refer to the AWS documentation for Cluster VPC Considerations. Follow one of the sets of instructions below based on your selection from the previous step.
If you choose to assign a public IP address to your cluster’s worker nodes, you have the option of choosing between a VPC that’s automatically generated by Rancher (i.e., Standard: Rancher generated VPC and Subnet), or a VPC that you’ve already created with AWS (i.e., Custom: Choose from your existing VPC and Subnets). Choose the option that best fits your use case.
If you’re using Custom: Choose from your existing VPC and Subnets:
(If you’re using Standard, skip to the instance options.)
Make sure Custom: Choose from your existing VPC and Subnets is selected.
From the drop-down that displays, choose a VPC.
Click Next: Select Subnets. Then choose one of the Subnets that displays.
Click Next: Select Security Group.
If your worker nodes have Private IPs only, you must also choose a VPC & Subnet that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
Follow the steps below.
Tip: When using only private IP addresses, you can provide your nodes internet access by creating a VPC with two sets of subnets: a private set and a public set. The private set should have its route tables configured to point toward a NAT gateway in the public set (a sketch follows these steps). For more information on routing traffic from private subnets, please see the official AWS documentation.
From the drop-down that displays, choose a VPC.
Click Next: Select Subnets. Then choose one of the Subnets that displays.
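As a sketch of the tip above, the boto3 calls below create a NAT gateway in a public subnet and point a private route table’s default route at it. All IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# The NAT gateway lives in a public subnet and needs an Elastic IP.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaaaaaaaaaaaaaaa",  # public subnet
    AllocationId="eipalloc-0123456789abcdef0",
)["NatGateway"]

# Send the private route table's default route through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # private subnets' route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
```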
Security Group
For more information, refer to the Amazon documentation on security group requirements and considerations for EKS clusters.
Instance Options
The instance type and size of your worker nodes affect how many IP addresses each worker node has available. See this documentation for more information.
Option | Description |
---|---|
Instance Type | Choose the hardware specs for the instance you’re provisioning. |
Custom AMI Override | If you want to use a custom Amazon Machine Image (AMI), specify it here. By default, Rancher will use the EKS-optimized AMI for the EKS version that you chose. |
Desired ASG Size | The number of instances that your cluster will provision. |
User Data | Custom commands can be passed to perform automated configuration tasks. WARNING: Modifying this may cause your nodes to be unable to join the cluster. Note: Available as of v2.2.0 |
Troubleshooting
If your changes were overwritten, it could be due to the way the cluster data is synced with EKS. Changes shouldn’t be made to the cluster from another source, such as the EKS console, and in Rancher within the same five-minute span. For information on how this works and how to configure the refresh interval, refer to Syncing.
If an unauthorized error is returned while attempting to modify or register the cluster and the cluster was not created with the role or user that your credentials belong to, refer to Security and Compliance.
For any issues or troubleshooting details for your Amazon EKS Kubernetes cluster, please see this documentation.
AWS Service Events
To find information on any AWS Service events, please see this page.
Security and Compliance
By default, only the IAM user or role that created a cluster has access to it. Attempting to access the cluster with any other user or role, without additional configuration, will lead to an error. In Rancher, this means that using a credential that maps to a user or role that was not used to create the cluster will cause an unauthorized error. For example, an eksctl cluster will not register in Rancher unless the credentials used to register the cluster match the role or user used by eksctl. Additional users and roles can be authorized to access a cluster by adding them to the aws-auth ConfigMap in the kube-system namespace. For a more in-depth explanation and detailed instructions, please see this documentation.
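As a sketch of what that authorization looks like, the snippet below appends an IAM user to the mapUsers entry of the aws-auth ConfigMap using the Kubernetes Python client. The ARN, username, and group are placeholders; back up the ConfigMap first, since a malformed aws-auth can lock all users out of the cluster.

```python
from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the EKS cluster
v1 = client.CoreV1Api()

# Read the existing aws-auth ConfigMap and append a new user mapping.
cm = v1.read_namespaced_config_map("aws-auth", "kube-system")
map_users = cm.data.get("mapUsers", "")
map_users += (
    "- userarn: arn:aws:iam::123456789012:user/rancher-eks\n"
    "  username: rancher-eks\n"
    "  groups:\n"
    "    - system:masters\n"
)
cm.data["mapUsers"] = map_users
v1.replace_namespaced_config_map("aws-auth", "kube-system", cm)
```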
For more information on security and compliance with your Amazon EKS Kubernetes cluster, please see this documentation.
Tutorial
This tutorial on the AWS Open Source Blog will walk you through how to set up an EKS cluster with Rancher, deploy a publicly accessible app to test the cluster, and deploy a sample project to track real-time geospatial data using a combination of other open-source software such as Grafana and InfluxDB.
Minimum EKS Permissions
Documented here is a minimum set of permissions necessary to use all functionality of the EKS driver in Rancher. Additional permissions are required for Rancher to provision the Service Role and VPC resources. Optionally, these resources can be created before the cluster creation and will be selectable when defining the cluster configuration.
Resource | Description |
---|---|
Service Role | The service role provides Kubernetes the permissions it requires to manage resources on your behalf. Rancher can create the service role with the following Service Role Permissions. |
VPC | Provides isolated network resources used by EKS and worker nodes. Rancher can create the VPC resources with the following VPC Permissions. |
Resource targeting uses `*`, as the ARNs of many of the resources created cannot be known before creating the EKS cluster in Rancher.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "EC2Permisssions",
"Effect": "Allow",
"Action": [
"ec2:RunInstances",
"ec2:RevokeSecurityGroupIngress",
"ec2:RevokeSecurityGroupEgress",
"ec2:DescribeVpcs",
"ec2:DescribeTags",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeRouteTables",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeKeyPairs",
"ec2:DescribeInternetGateways",
"ec2:DescribeImages",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeAccountAttributes",
"ec2:DeleteTags",
"ec2:DeleteSecurityGroup",
"ec2:DeleteKeyPair",
"ec2:CreateTags",
"ec2:CreateSecurityGroup",
"ec2:CreateLaunchTemplateVersion",
"ec2:CreateLaunchTemplate",
"ec2:CreateKeyPair",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:AuthorizeSecurityGroupEgress"
],
"Resource": "*"
},
{
"Sid": "CloudFormationPermisssions",
"Effect": "Allow",
"Action": [
"cloudformation:ListStacks",
"cloudformation:ListStackResources",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackResources",
"cloudformation:DescribeStackResource",
"cloudformation:DeleteStack",
"cloudformation:CreateStackSet",
"cloudformation:CreateStack"
],
"Resource": "*"
},
{
"Sid": "IAMPermissions",
"Effect": "Allow",
"Action": [
"iam:PassRole",
"iam:ListRoles",
"iam:ListRoleTags",
"iam:ListInstanceProfilesForRole",
"iam:ListInstanceProfiles",
"iam:ListAttachedRolePolicies",
"iam:GetRole",
"iam:GetInstanceProfile",
"iam:DetachRolePolicy",
"iam:DeleteRole",
"iam:CreateRole",
"iam:AttachRolePolicy"
],
"Resource": "*"
},
{
"Sid": "KMSPermisssions",
"Effect": "Allow",
"Action": "kms:ListKeys",
"Resource": "*"
},
{
"Sid": "EKSPermisssions",
"Effect": "Allow",
"Action": [
"eks:UpdateNodegroupVersion",
"eks:UpdateNodegroupConfig",
"eks:UpdateClusterVersion",
"eks:UpdateClusterConfig",
"eks:UntagResource",
"eks:TagResource",
"eks:ListUpdates",
"eks:ListTagsForResource",
"eks:ListNodegroups",
"eks:ListFargateProfiles",
"eks:ListClusters",
"eks:DescribeUpdate",
"eks:DescribeNodegroup",
"eks:DescribeFargateProfile",
"eks:DescribeCluster",
"eks:DeleteNodegroup",
"eks:DeleteFargateProfile",
"eks:DeleteCluster",
"eks:CreateNodegroup",
"eks:CreateFargateProfile",
"eks:CreateCluster"
],
"Resource": "*"
}
]
}
```
Service Role Permissions
Rancher will create a service role with the following trust policy:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
```
This role will also have two role policy attachments with the following policy ARNs:
- arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
- arn:aws:iam::aws:policy/AmazonEKSServicePolicy
The following permissions are required for Rancher to create the service role on the user’s behalf during the EKS cluster creation process.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "IAMPermisssions",
"Effect": "Allow",
"Action": [
"iam:AddRoleToInstanceProfile",
"iam:AttachRolePolicy",
"iam:CreateInstanceProfile",
"iam:CreateRole",
"iam:CreateServiceLinkedRole",
"iam:DeleteInstanceProfile",
"iam:DeleteRole",
"iam:DetachRolePolicy",
"iam:GetInstanceProfile",
"iam:GetRole",
"iam:ListAttachedRolePolicies",
"iam:ListInstanceProfiles",
"iam:ListInstanceProfilesForRole",
"iam:ListRoles",
"iam:ListRoleTags",
"iam:PassRole",
"iam:RemoveRoleFromInstanceProfile"
],
"Resource": "*"
}
]
}
```
VPC Permissions
The following permissions are required for Rancher to create the VPC and associated resources.
```json
{
"Sid": "VPCPermissions",
"Effect": "Allow",
"Action": [
"ec2:ReplaceRoute",
"ec2:ModifyVpcAttribute",
"ec2:ModifySubnetAttribute",
"ec2:DisassociateRouteTable",
"ec2:DetachInternetGateway",
"ec2:DescribeVpcs",
"ec2:DeleteVpc",
"ec2:DeleteTags",
"ec2:DeleteSubnet",
"ec2:DeleteRouteTable",
"ec2:DeleteRoute",
"ec2:DeleteInternetGateway",
"ec2:CreateVpc",
"ec2:CreateSubnet",
"ec2:CreateSecurityGroup",
"ec2:CreateRouteTable",
"ec2:CreateRoute",
"ec2:CreateInternetGateway",
"ec2:AttachInternetGateway",
"ec2:AssociateRouteTable"
],
"Resource": "*"
}
```
Syncing
Syncing is the feature that causes Rancher to update its EKS clusters’ values so that they stay up to date with the corresponding cluster object in the EKS console. This allows Rancher not to be the sole owner of an EKS cluster’s state. Its largest limitation is that processing an update from Rancher and another source at the same time, or within five minutes of one finishing, may cause the state from one source to completely overwrite the other.
How it works
There are two fields on the Rancher Cluster object that are key to understanding how syncing works:
- EKSConfig, which is located on the Spec of the Cluster.
- UpstreamSpec, which is located on the EKSStatus field on the Status of the Cluster.
Both are defined by the EKSClusterConfigSpec struct found in the eks-operator project: https://github.com/rancher/eks-operator/blob/master/pkg/apis/eks.cattle.io/v1/types.go
All fields, with the exception of DisplayName, AmazonCredentialSecret, Region, and Imported, are nillable on the EKSClusterConfigSpec.
The EKSConfig represents the desired state for its non-nil values. Fields that are non-nil in the EKSConfig can be thought of as “managed”. When a cluster is created in Rancher, all fields are non-nil and therefore “managed”. When a pre-existing cluster is registered in Rancher, all nillable fields are nil and are not “managed”. A field becomes managed once its value has been changed by Rancher.
The UpstreamSpec represents the cluster as it is in EKS and is refreshed on an interval of five minutes. After the UpstreamSpec has been refreshed, Rancher checks whether the EKS cluster has an update in progress. If it is updating, nothing further is done. If it is not currently updating, any “managed” fields on the EKSConfig are overwritten with their corresponding values from the recently refreshed UpstreamSpec.
The effective desired state can be thought of as the UpstreamSpec + all non-nil fields in the EKSConfig. This is what is displayed in the UI.
If Rancher and another source attempt to update an EKS cluster at the same time, or within the five-minute refresh window of an update finishing, then any “managed” fields can be caught in a race condition. For example, a cluster may have PrivateAccess as a managed field. If PrivateAccess is false, is then enabled in the EKS console with the update finishing at 11:01, and tags are then updated from Rancher before 11:05, the PrivateAccess value will likely be overwritten. This would also occur if the tags were updated while the cluster was processing the update. If the cluster was registered and the PrivateAccess field was nil, this issue would not occur in the aforementioned case.
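The merge behavior described above can be summarized in a few lines. The sketch below is an illustration only, using plain dictionaries in place of the Go structs in rancher/eks-operator:

```python
# Illustration only: the effective desired state is the UpstreamSpec
# overlaid with every non-nil ("managed") field from EKSConfig.
def effective_state(upstream_spec: dict, eks_config: dict) -> dict:
    merged = dict(upstream_spec)
    for field, value in eks_config.items():
        if value is not None:  # nil fields are unmanaged; upstream wins
            merged[field] = value
    return merged

upstream = {"privateAccess": True, "tags": {"team": "ops"}}
eks_config = {"privateAccess": None, "tags": {"team": "dev"}}  # tags managed
print(effective_state(upstream, eks_config))
# {'privateAccess': True, 'tags': {'team': 'dev'}}
```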
Configuring the Refresh Interval
It is possible to change the refresh interval through the setting “eks-refresh-cron”. This setting accepts values in the Cron format. The default is `*/5 * * * *`. The shorter the refresh window, the less likely any race conditions will occur, but shortening it increases the likelihood of encountering request limits that may be in place for AWS APIs.
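If you prefer to change the setting through Rancher’s API rather than the UI, a sketch is below. It assumes the standard v3 settings endpoint; the URL and bearer token are placeholders.

```python
import requests

rancher_url = "https://rancher.example.com"
headers = {"Authorization": "Bearer token-xxxxx:secret-placeholder"}

# Update the eks-refresh-cron setting to refresh every 15 minutes.
resp = requests.put(
    f"{rancher_url}/v3/settings/eks-refresh-cron",
    headers=headers,
    json={"value": "*/15 * * * *"},
)
resp.raise_for_status()
```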