AWS EKS Container Service Tutorial
Introduction: Welcome to the Wonderful World of EKS
If you’ve ever looked at Kubernetes and thought, “Wow, that’s powerful… and also somehow my personality,” then you’re in the right place. AWS EKS (Elastic Kubernetes Service) takes the “manage the control plane” burden off your shoulders and hands it to the people who invented managed services in the first place. You still manage workloads, node groups, and the fun stuff like deployments, but AWS keeps the Kubernetes brains running.
This tutorial is written to feel like a helpful friend standing next to you while you type commands. We’ll build an EKS cluster, connect with kubectl, deploy a sample application, and verify things are alive and behaving. We’ll also talk about the typical tripwires: IAM permissions, VPC networking, node group provisioning, and the occasional “why isn’t my pod coming up?” mystery. Because no one needs stress when they can have Kubernetes instead.
What You’ll Learn
By the end, you’ll be able to:
- Set up prerequisites on your machine (and sanity on your AWS account).
- Create an EKS cluster in an AWS VPC with the right networking approach.
- Create managed node groups so you can run pods somewhere other than your imagination.
- Configure kubectl to talk to your cluster.
- Deploy and expose a sample workload.
- Troubleshoot the usual suspects when things go sideways.
We’ll use AWS’s recommended tooling approach and keep steps clear, so you can repeat them later without needing to consult an oracle (or a teammate’s forehead). If you already know some of these concepts, skim ahead—your time is valuable and so is your ability to not read.
Prerequisites: The Things You Need Before You Start
Before spinning up anything, gather your tools and verify access. EKS isn’t hard, but it does expect you to have the correct pieces in place. Here’s your checklist.
1) An AWS Account With the Right Permissions
You’ll need an AWS account and permissions to create EKS clusters, manage IAM roles, and set up networking resources. In practice, you’ll want an IAM user or role with (at least) the ability to use EKS, EC2, IAM, and CloudFormation. The exact permissions vary by setup, but if you get access denied errors, don’t panic; it’s usually just missing IAM actions.
2) AWS CLI
Install and configure the AWS CLI on your computer:
- Confirm installation: aws --version
- Configure credentials: aws configure
- Make sure you’re using the intended region: aws configure get region
Also, ensure you’re authenticated: aws sts get-caller-identity should return your account and user/role info.
3) kubectl
kubectl is your Kubernetes command-line swiss army knife. Install it and verify:
- kubectl version --client
You’ll use it to apply manifests, inspect resources, and confirm the cluster is real and not just a bedtime story AWS told you.
4) eksctl (Recommended)
For most tutorial flows, eksctl, the official CLI for creating and managing EKS clusters, simplifies cluster creation a lot. Install it and verify:
- eksctl version
If you prefer “show me every Terraform block and all the pain,” you can do this without eksctl. But the tutorial goal is to help you succeed first. Pain can come later as a hobby.
5) A Basic Understanding of Networking
You don’t need to be a VPC wizard, but you should understand:
- Subnets (public vs private)
- Route tables and internet gateways
- Security groups
- Why pods need to reach services and vice versa
EKS will run pods on worker nodes, and those nodes must be in subnets that make sense for your chosen exposure model.
Step 1: Choose a Region and Decide on a VPC Strategy
First, pick the AWS region you want to use. EKS is regional, so everything happens inside that region. Some regions may have different service limits, but usually you’ll be fine.
For the VPC, you have two common approaches:
- Create a new VPC tailored for EKS (typical for fresh setups)
- Use an existing VPC (common in real organizations)
This tutorial will lean toward creating a clean, dedicated setup because it’s easier to understand and easier to debug.
Step 2: Create an EKS Cluster (Managed Control Plane)
Time to create your cluster. There are multiple ways to do this; we’ll use eksctl for a smooth tutorial experience. The cluster is the Kubernetes control plane (managed by AWS), plus the supporting infrastructure.
Create a Cluster Configuration
eksctl often uses a YAML configuration file. Here’s a representative example. You’ll edit region, cluster name, and some networking details to match your preferences.
Example file: eks-cluster.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-eks
  region: us-east-1
  version: "1.29"
vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: false
  subnets:
    public:
      us-east-1a: { id: subnet-aaaaaaa }
      us-east-1b: { id: subnet-bbbbbbb }
    private:
      us-east-1a: { id: subnet-ccccccc }
      us-east-1b: { id: subnet-dddddddd }
managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    privateNetworking: true
    volumeSize: 20
    iam:
      withAddonPolicies:
        autoScaler: true
        albIngress: true
Note: the subnet IDs are placeholders. If you’re using an existing VPC, replace them with your real subnet IDs. If you’re creating a new VPC, you can omit the vpc.subnets block entirely and let eksctl create a dedicated VPC and subnets for you.
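For comparison, here’s a minimal sketch of a config that skips the vpc block so eksctl provisions a fresh VPC on your behalf (the name, region, and instance type are just illustrative):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-eks
  region: us-east-1
  version: "1.29"
managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2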
Verify Your Configuration
Before you press the big red “create” button, confirm your config has the essentials: cluster name, region, Kubernetes version, VPC endpoints, and at least one node group.
Also double-check that your subnets are in at least two availability zones. This isn’t just for aesthetics; it helps with resilience and standard Kubernetes practices.
Run the Cluster Creation Command
Once your YAML is ready, create the cluster:
eksctl create cluster -f eks-cluster.yaml
Depending on your environment and AWS capacity, creation can take 10 to 30 minutes (or longer, if AWS decides to do its own breathing exercises). During this time, you can monitor progress with eksctl output and by checking your AWS console if you enjoy visual confirmation like it’s a sports score.
Step 3: Configure kubectl to Access Your Cluster
When the cluster is ready, you need to tell kubectl how to connect. eksctl typically can update your kubeconfig automatically. Run:
eksctl utils write-kubeconfig --cluster demo-eks --region us-east-1
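If you prefer the AWS CLI instead, the equivalent command is:
aws eks update-kubeconfig --name demo-eks --region us-east-1
Both write an entry into your kubeconfig (usually ~/.kube/config) and set it as the current context.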
Now test connectivity:
kubectl get nodes
At first, you might see nodes in a “NotReady” state. That’s usually temporary while the node group bootstraps. Watch for the nodes to become Ready.
Also check:
kubectl get pods -A
This will show system pods across namespaces. You want the usual suspects (like CoreDNS and the EKS system components) to be healthy.
Step 4: Install the Metrics Server (Optional But Helpful)
If you want autoscaling and better visibility, the metrics server is a common addition. It allows kubectl top commands to work and provides the resource metrics that the Horizontal Pod Autoscaler (HPA) relies on.
A typical approach is to apply the metrics-server manifests, often from a trusted source. Since tutorial environments vary and versions matter, follow the latest official instructions for your chosen Kubernetes version.
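For reference, a commonly used install command is the one published by the metrics-server project (double-check the project’s docs for compatibility with your Kubernetes version):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl get deployment metrics-server -n kube-system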
After installation, verify:
kubectl top nodes
If this works, congratulations—you’ve now gained a little observability. If it doesn’t, it’s usually a permissions issue, a networking issue, or “metrics server pod not ready.”
Step 5: Deploy a Sample Application
Now for the fun part: deploying something that actually runs. Let’s deploy a simple web app. A common choice is the “nginx” container because it’s ubiquitous and boring in the best way.
Create a Deployment and Service
Let’s create a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
Save it as nginx-deployment.yaml and apply:
kubectl apply -f nginx-deployment.yaml
Next, create a service. For an internal cluster test, you can use ClusterIP first:
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-svc
spec:
  selector:
    app: nginx-demo
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
Save as nginx-service.yaml and apply:
kubectl apply -f nginx-service.yaml
Verify the Deployment
Check the pods:
kubectl get pods -o wide
Check the deployment:
kubectl get deployments
Check the service:
kubectl get svc
You should see your service with a ClusterIP address. If pods are running, you’re off to a great start.
Test Inside the Cluster
Since ClusterIP isn’t reachable from the outside world, you can test by launching a temporary pod in the same cluster and curling the service. For example:
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
sh -c "curl -sS nginx-demo-svc:80 | head"
If you get an HTML snippet from Nginx, that means service discovery is working and your app is alive.
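If you’d rather test from your own machine without exposing anything publicly, kubectl port-forward is a handy alternative:
kubectl port-forward svc/nginx-demo-svc 8080:80
Then open http://localhost:8080 in a browser, or curl it from another terminal.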
Step 6: Expose the Application (Ingress or Load Balancer)
For a public demo, you’ll want an ingress controller and an Ingress resource (or a service of type LoadBalancer, depending on your approach). In real projects, Ingress is usually the preferred path because it allows you to route multiple services under one load balancer.
This tutorial assumes you’d like a straightforward ingress experience. A popular choice is AWS Load Balancer Controller, which integrates Kubernetes Ingress resources with AWS load balancers.
Install an Ingress Controller (Conceptual Overview)
Installing the ingress controller typically includes:
- Creating/using an IAM role with the right permissions
- Installing Kubernetes controller manifests
- Verifying the controller is running in the cluster
With eksctl, some configurations automatically wire add-on policies such as albIngress. If you included that in your node group IAM settings, you may already be halfway there.
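For orientation, here’s a representative install sequence for the AWS Load Balancer Controller using eksctl and Helm. Treat it as a sketch: it assumes the AWSLoadBalancerControllerIAMPolicy already exists in your account, <account-id> is a placeholder, and flag details change between versions, so follow the latest AWS documentation.
eksctl utils associate-iam-oidc-provider --cluster demo-eks --region us-east-1 --approve
eksctl create iamserviceaccount \
  --cluster demo-eks \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::<account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=demo-eks \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller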
Check Controller Status
Run:
kubectl get pods -n kube-system | grep -i load-balancer-controller
If you see controller pods and they’re ready, you’re good. If not, install the controller according to the latest AWS guidance for your Kubernetes version.
Create an Ingress Resource
Here’s an example Ingress manifest that routes a host to your service. In practice, you’ll set annotations to configure the load balancer type and scheme.
Example: nginx-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: nginx-demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-demo-svc
                port:
                  number: 80
A few notes:
- The host is an example. You can also omit host rules depending on your controller behavior.
- DNS and certificates are out of scope here, but you can extend the tutorial to include TLS (see the annotation sketch below).
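If you do add TLS later, a representative approach with the AWS Load Balancer Controller (assuming you already have a certificate in AWS Certificate Manager) is to add annotations along these lines; the ARN values are placeholders, and the exact annotation set depends on your controller version:
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:<account-id>:certificate/<certificate-id>
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'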
Apply the Ingress
kubectl apply -f nginx-ingress.yaml
Then watch:
kubectl get ingress
You may need to wait a few minutes while AWS provisions the load balancer. During that time, ADDRESS might be empty. That’s normal. The load balancer is basically assembling itself with the enthusiasm of a sloth, but eventually it arrives.
Look Up Events If Something Is Wrong
If your ingress doesn’t get a load balancer, inspect events:
kubectl describe ingress nginx-demo-ingress
Also check the controller logs:
kubectl get pods -A
Find the ingress controller pod and then:
kubectl logs -n <controller-namespace> <controller-pod-name>
Common issues include missing IAM permissions, wrong ingress class name, or invalid annotations.
Step 7: Understand Node Groups and Scaling
In EKS, AWS manages the control plane while you run worker nodes to host your pods. Worker nodes are organized into node groups, and managed node groups add automation for provisioning, scaling, and updates.
Check Your Node Groups
From Kubernetes you can view nodes:
kubectl get nodes -o wide
From AWS you can check the corresponding Auto Scaling groups and node group status. The exact mapping depends on your setup.
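If you used eksctl, two handy commands for this are (names here match the earlier example config):
eksctl get nodegroup --cluster demo-eks --region us-east-1
aws eks describe-nodegroup --cluster-name demo-eks --nodegroup-name ng-1 --region us-east-1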
Try Scaling the Deployment
For a quick test, scale up your deployment:
kubectl scale deployment nginx-demo --replicas=4
Then verify:
kubectl get pods
If you have enough node capacity, you’ll see pods spread out across nodes. If not, the cluster autoscaler (if enabled) may launch more nodes, assuming your setup supports it.
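If you installed the metrics server in Step 4, you can also try a Horizontal Pod Autoscaler against the same deployment. A quick sketch (note that CPU-based autoscaling only works if the container has a CPU request set, which the minimal nginx manifest above does not):
kubectl autoscale deployment nginx-demo --cpu-percent=50 --min=2 --max=6
kubectl get hpa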
Step 8: Troubleshooting Guide (Because Cloud Life Is a Comedy)
Let’s cover the top “why is nothing working?” scenarios. Even if everything seems fine, keep this section handy for future you, who will definitely be surprised by their past choices.
Problem: Nodes Stay NotReady
Possible causes:
- Network misconfiguration between nodes and control plane
- Security group rules missing
- Node IAM role lacks required permissions
- Bootstrap issues (user data, AMI problems)
What to do:
- Check node conditions: kubectl describe node <node-name>
- Check node group status in AWS
- Verify security groups for required ports
If you see errors about kubelet or bootstrap, it’s usually an IAM or networking issue.
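One more useful check when nodes misbehave: ask AWS directly about node group health. For example (cluster and node group names from the earlier config):
aws eks describe-nodegroup --cluster-name demo-eks --nodegroup-name ng-1 --query 'nodegroup.health' --region us-east-1
Any issues reported there usually point straight at the IAM or networking culprit.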
Problem: Pods CrashLoopBackOff
Your pod started but keeps dying. Typical causes:
- Wrong image or incompatible entrypoint
- Missing environment variables
- Bad container command or args
- Application failing health checks
What to do:
kubectl describe pod <pod-name>
And inspect logs:
kubectl logs <pod-name> --previous
The “previous” logs can be the difference between guesswork and truth.
Problem: Services Don’t Reach Each Other
If your cluster-internal curl fails, check:
- Service selector matches your pod labels
- Pods are running and ready
- Network policies (if you use them) allow traffic
Verify labels and endpoints:
kubectl get endpoints nginx-demo-svc
If endpoints are empty, your selector isn’t matching pods or pods aren’t ready.
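Two quick commands that make selector mismatches obvious:
kubectl get pods -l app=nginx-demo --show-labels
kubectl get svc nginx-demo-svc -o jsonpath='{.spec.selector}'
If the label filter returns no pods, or the selector output doesn’t match the pod labels, you’ve found your problem.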
Problem: Ingress Doesn’t Create a Load Balancer
Common causes include:
- Missing or incorrect ingress class name
- Controller not installed or not running
- IAM role missing permissions
- Conflicting annotations
Actions:
- kubectl describe ingress <name>
- Check controller logs
- Check AWS resources: load balancers, target groups, security groups
Cloud platforms are very polite about failure. They just speak in error logs and console messages that feel like riddles.
Step 9: Security Notes (Because “Works” Is Not the Same as “Safe”)
Once you’re up and running, consider security best practices. Tutorials sometimes skip them because the article is already long and your coffee is getting cold. But you should know what to look for.
Use Least Privilege for IAM
Ensure your node group roles and controller roles have only the required permissions. Overly broad permissions make auditors sad and security teams cry into their spreadsheets.
Prefer Private Networking for Nodes
When possible, place worker nodes in private subnets and restrict inbound access. Use load balancers and ingress controllers to handle external traffic safely rather than opening up everything.
Enable Logging and Monitoring
Watch logs and metrics so you can detect issues early. At minimum, check:
- Cluster events
- Controller logs
- Application logs (from pods)
Additionally, consider enabling CloudWatch logs for Kubernetes components, and deploy a monitoring stack if you need dashboards and alerts.
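If you created the cluster with eksctl, one way to turn on control plane logging is the following (the --enable-types value can be narrowed to just what you need, such as audit or api):
eksctl utils update-cluster-logging --cluster demo-eks --region us-east-1 --enable-types all --approve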
Step 10: Clean Up Resources (So Your Billing Doesn’t Become a Horror Story)
When you’re done testing, delete resources. EKS bills for the control plane by the hour, plus the worker nodes, load balancers, and other supporting resources, and leaving it all running can turn your lab into a subscription you didn’t mean to purchase.
To delete via eksctl, you can typically run:
eksctl delete cluster --name demo-eks --region us-east-1
If you created additional resources (VPC, load balancers), make sure they’re cleaned up as well. AWS sometimes retains resources depending on deletion policies and how you created them.
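A couple of representative commands for spotting leftovers (eksctl creates its resources through CloudFormation stacks named after the cluster):
aws elbv2 describe-load-balancers --query 'LoadBalancers[].LoadBalancerName' --output table
aws cloudformation list-stacks --query "StackSummaries[?starts_with(StackName, 'eksctl-demo-eks')].[StackName,StackStatus]" --output table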
Before deletion, double-check you’re deleting the right cluster. Cluster names like “demo-eks” are easy to confuse with other demos if you have a talent for chaos. Future you will thank present you for verifying.
Bonus: A Simple “Repeatable Checklist” for Future EKS Clusters
If you plan to create more clusters, a checklist saves time and reduces mistakes. Here’s a practical one:
- Choose region and Kubernetes version.
- Decide on VPC strategy (new VPC or existing).
- Ensure at least two availability zones.
- Create cluster configuration with the right endpoint access.
- Create managed node groups with appropriate instance types.
- Apply and wait for cluster creation.
- Update kubeconfig and verify kubectl get nodes.
- Deploy a test app and service (ClusterIP first).
- Install ingress controller (if you need external access).
- Deploy ingress and confirm load balancer provisioning.
- Scale deployment to verify operational behavior.
- Document networking and IAM assumptions for troubleshooting later.
- Clean up resources after testing.
It’s not glamorous, but neither is dealing with a broken subnet ID at 2 a.m. Your future self deserves better.
Conclusion: You Now Have a Working EKS Environment
Congratulations! You’ve created an AWS EKS cluster, connected to it with kubectl, deployed a sample application, and walked through the common troubleshooting paths that show up when reality interrupts. EKS is a powerful platform because it gives you managed control plane operations while keeping Kubernetes flexibility for your workloads.
From here, you can extend your setup in many directions: add CI/CD pipelines, set up proper ingress with TLS, implement autoscaling with HPA and cluster autoscaler, integrate with AWS services like S3 and IAM roles for service accounts, and build out monitoring and alerting.
Just remember: Kubernetes will occasionally punish assumptions. But once you learn to check logs, events, and label selectors, the “mysteries” become detective work. And detective work is fun, even when it’s sponsored by AWS.

