Replication controller in kubernetes
Kubernetes is a technology that can be somewhat difficult to understand, but once you have a proper understanding of its concepts it becomes much easier to implement.
People sometimes face problems in understanding the replication controller in Kubernetes.
Today we'll drill down, one step at a time, into the Kubernetes replication controller.
Sample replication controller file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Overall view
- A replication controller ensures that the specified number of pods is running in the cluster, i.e. that a homogeneous set of pods is always up and available.
- Use a replication controller even if your application requires only a single pod.
- If there are too many pods, it will kill some; if there are too few, the replication controller will start more (a small demonstration follows this list).
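A minimal way to see this self-healing behaviour, assuming the nginx replication controller shown above has already been created (the pod name below is a placeholder):
#kubectl get pods // three nginx pods should be listed
#kubectl delete pod <one-of-the-nginx-pod-names>
#kubectl get pods // the controller starts a replacement to get back to 3 replicas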
Syntax of myapp-rc.yaml file
Before going ahead it's very important to understand the syntax and terminology of the replication controller file.
- apiVersion, kind, metadata and spec are the top-level fields and sit at the same indentation level in the above file
- metadata can contain the name and labels of the replication controller
- all the specification of the replication controller comes under the spec section
- spec mainly contains three pieces of information: replicas, selector and template
- replicas is the number of pods the cluster should be running at any time
- selector decides which pods the controller manages: it manages the pods whose labels match the selector, and those labels are the ones we set in spec.template.metadata.labels
Labels on the Replication Controller
The replication controller can itself have labels (.metadata.labels). Typically, you would set these the same as the .spec.template.metadata.labels; if .metadata.labels is not specified then it is defaulted to .spec.template.metadata.labels. However, they are allowed to be different, and the .metadata.labels do not affect the behavior of the replication controller.
Pod Selector
The .spec.selector field is a label selector. A replication controller manages all the pods with labels which match the selector. It does not distinguish between pods which it created or deleted versus pods which some other person or process created or deleted. This allows the replication controller to be replaced without affecting the running pods.
If specified, the .spec.template.metadata.labels must be equal to the .spec.selector, or it will be rejected by the API. If .spec.selector is unspecified, it will be defaulted to .spec.template.metadata.labels.
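As a sketch, in the sample file above the selector and the template labels agree; if you changed one of the two app: nginx values, the API would reject the controller:
spec:
  selector:
    app: nginx          # must equal the template labels below
  template:
    metadata:
      labels:
        app: nginx      # pods created from this template carry this label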
Multiple Replicas
You can specify how many pods should run concurrently by setting .spec.replicas to the number of pods you would like to have running concurrently. The number running at any time may be higher or lower, such as if the replica count was just increased or decreased, or if a pod is gracefully shut down and a replacement starts early.
If you do not specify .spec.replicas, then it defaults to 1.
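For example, to go from the 3 replicas in the sample file to 5, you could either edit .spec.replicas in the file and re-create the controller, or scale it directly from the command line (a sketch, using the controller name nginx from the sample above):
#kubectl scale rc nginx --replicas=5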
Kubernetes in your local Ubuntu VM
Problem: I am new to Kubernetes, Docker, Vagrant, SonarQube and Jenkins, so if you want to explore Kubernetes in your local VM you have to set up a Kubernetes cluster in that VM.
I am providing here the basic steps to set up a Kubernetes cluster locally.
Assumption: I'll set up master and node in one VM; the same machine works as both master and node.
Below are the basic steps:
1. Install prerequisite software packages on both master and node (in our case only one VM)
$ apt-get update
$ apt-get install ssh curl vim
2. Docker installation
sudo apt-get update
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo touch /etc/apt/sources.list.d/docker.list
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get purge lxc-docker
sudo apt-cache policy docker-engine
sudo apt-get install docker-engine -y
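As a quick sanity check (not part of the original steps), you can confirm the Docker engine installed correctly before moving on:
$ sudo docker --version
$ sudo docker run hello-world // pulls and runs a tiny test container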
3. Generate an ssh key pair
$ ssh-keygen -t rsa
4. Copy the ssh id_rsa key locally // optional, for password-less auth
$ ssh-copy-id -i /root/.ssh/id_rsa.pub 127.0.0.1
5. In case this command fails, please use this alternative solution in order to add the key // optional
$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
6. Validate the password-less ssh login // optional, password-less auth test
$ ssh root@127.0.0.1
root@virtual-machine:~$ exit
logout
Connection to 127.0.0.1 closed
7. Get the Kubernetes release bundle from the official github repository
8. Untar the Kubernetes bundle in the same directory
$ tar -xvf kubernetes.tar.gz
We will build the Kubernetes binaries specifically for the Ubuntu cluster.
9. Execute the following shell script
$ cd kubernetes/cluster/ubuntu
$ ./build.sh
10. Configure the cluster information by editing only the following parameters of the file cluster/ubuntu/config-default.sh in the editor of your choice.
$ cd
$ vi kubernetes/cluster/ubuntu/config-default.sh
export nodes="root@127.0.0.1 root@172.16.1.8"
export roles="ai i"
export NUM_MINIONS=${NUM_MINIONS:-2}
export FLANNEL_NET=172.16.0.0/16
NOTE: for multiple nodes you need to edit export roles=("ai" "i" "i") // script bug; a sketch of a multi-node config follows
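A sketch of what a two-machine config would look like, assuming a second node at 172.16.1.8 as in the listing above (one role entry per node):
export nodes="root@127.0.0.1 root@172.16.1.8"
export roles=("ai" "i")                 # first machine is master+node, second is node only
export NUM_MINIONS=${NUM_MINIONS:-2}
export FLANNEL_NET=172.16.0.0/16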
Only update the above-mentioned information in the file; the rest of the configuration remains as it is. The first variable, nodes, defines all the cluster nodes. The role "ai" specifies that the same machine will act as both master and node: "a" stands for master and "i" stands for node.
Now we will start the cluster with the following command:
11. Start up the cluster
$ cd kubernetes/cluster
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
Cluster validation succeeded
Done, listing cluster services:
12. We will add the kubectl binary to PATH in order to manage the Kubernetes cluster.
$ export PATH=$PATH:~/kubernetes/cluster/ubuntu/binaries // to get command access
Now we will validate that the K8s cluster created above is properly configured:
$ kubectl get nodes
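Optionally (not part of the original steps), you can also check that the master and its services are reachable:
$ kubectl cluster-info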
13. The Kubernetes UI can be accessed at https://127.0.0.1:8080/ui . In case it is not accessible, please create the following services and try again
// for UI configuration // optional
$ kubectl create -f addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
$ kubectl create -f addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
14. Container monitoring UI // cAdvisor
http://127.0.0.1:4194/containers/ // this can be the IP of the master instead of 127.0.0.1
15. To check other resources through the UI
http://127.0.0.1:8080/ // and select the desired option
To check the Kubernetes pod list
#kubectl get pod
To get Kubernetes services
#kubectl get services
Run an nginx container (named my-nginx) on port 80 (without replicas)
#kubectl run-container my-nginx --image=nginx --port=80
To create a container with YAML
#kubectl create -f replication.yaml
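Assuming replication.yaml contains the nginx ReplicationController shown at the top of this post, you can verify the result with:
#kubectl get rc // the nginx controller should report 3 replicas
#kubectl get pod // and three nginx pods should appear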
Replication/Scaling
Kubernetes container replication:
1. Pull the nginx image with Docker
#docker pull nginx
2. Create a container with replication
# kubectl run my-nginxtest --image=nginx --replicas=2 --port=80
3. To check the replication controller
#kubectl get rc
4. To check the replicated pods
#kubectl get pod
5. Create a load balancer for the my-nginxtest container
#kubectl expose rc my-nginxtest --port=80 --create-external-load-balancer
6. To check full container details, including IP
#kubectl describe pod my-nginxtest-h91jo
7. To stop the replication controller (see the note after this list)
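The original does not give a command for step 7; a likely choice (an assumption, not from the original) is simply deleting the replication controller, which also removes its pods:
#kubectl delete rc my-nginxtest
As a side note on step 5, on newer kubectl versions the external load balancer is usually requested with --type=LoadBalancer, and the same thing can be written as a Service manifest. A minimal sketch, assuming kubectl run labelled the pods with run=my-nginxtest (its default label):
apiVersion: v1
kind: Service
metadata:
  name: my-nginxtest
spec:
  type: LoadBalancer
  selector:
    run: my-nginxtest     # label assumed to have been set by kubectl run
  ports:
  - port: 80
    targetPort: 80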
Scaling replication
Scaling our nginx replication controller
#kubectl scale --current-replicas=2 --replicas=3 replicationcontrollers my-nginxtest
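To confirm the scale operation worked (a sketch):
#kubectl get rc my-nginxtest // should now report 3 replicas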
Note: if anybody is facing an issue, please contact me using the comment section and provide suggestions.