Installing Kubernetes 1.15.1 with kubeadm

Note: there are generally three ways to install k8s:

  1. From rpm/binary packages, like ordinary software. The k8s internals are complex, and the certificates between components alone are a headache, so unless you have plenty of patience and a solid foundation this route leads to frustration;
  2. With minikube: quickly builds a single-node k8s for trying things out and simple tests; not suitable for production;
  3. With kubeadm: the installation method officially recommended by k8s, which simplifies many tedious steps; this is the method used in this article.

Note: run the following steps on every node

Set the hostname:

hostnamectl set-hostname master    # use node1 / node2 on the other nodes

Configure /etc/hosts:

cat >> /etc/hosts <<'EOF'
172.17.2.81 master
172.17.2.82 node1
172.17.2.83 node2
EOF

System initialization:

centos7_initial_scripts

Synchronize time:

ntpdate time.apple.com
echo '*/10 * * * * root ntpdate time.apple.com > /dev/null 2>&1' >> /etc/crontab
systemctl restart crond

Disable swap: 1)

swapoff -a && sysctl -w vm.swappiness=0
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
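The sed expression above comments out every fstab line that contains " swap ". A minimal sketch of the same expression against a throwaway copy (sample content under a hypothetical /tmp path), so the effect can be seen without touching the real /etc/fstab:

```shell
# Build a disposable fstab copy (sample content, not the real file).
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same expression as above: prefix lines containing " swap " with '#'.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

cat /tmp/fstab.demo
```

Only the swap line gets commented; the root filesystem line is left alone.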

Adjust the sysctl kernel parameters:

cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system

Load the ipvs kernel modules:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | egrep 'ip_vs|nf_conntrack_ipv4'
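The lsmod check can be wrapped into a small guard script that names exactly which module is missing. Sketched here against a captured listing (simulated in a hypothetical /tmp/lsmod.out) so it runs anywhere; on a real node replace the heredoc with `lsmod > /tmp/lsmod.out`:

```shell
# Simulated `lsmod` output; on a real node use: lsmod > /tmp/lsmod.out
cat > /tmp/lsmod.out <<'EOF'
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      19149  2 ip_vs
EOF

# Collect every required module that does not appear as a first field.
missing=""
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    grep -q "^${m} " /tmp/lsmod.out || missing="$missing $m"
done
echo "missing modules:${missing:- none}" > /tmp/ipvs-check.out
cat /tmp/ipvs-check.out
```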

Stop the firewall:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Set up the yum repositories:

# docker-ce
curl -sk -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# kubernetes
cat > /etc/yum.repos.d/kubernetes.repo<<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF

Install the packages:

yum install docker-ce kubelet kubeadm kubectl ipset ipvsadm -y
systemctl enable docker kubelet

Configure kubectl auto-completion:

# setup autocomplete in bash into the current shell, bash-completion package should be installed first.
source <(kubectl completion bash)
# add autocomplete permanently to your bash shell.
echo "source <(kubectl completion bash)" >> ~/.bashrc

Configure docker:

  • change the docker cgroup driver to systemd; 2)
  • edit the docker unit file; 3)
systemctl start docker
# change the docker cgroup driver to systemd
cat > /etc/docker/daemon.json<<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# FORWARD chain
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
# GFW !!!
sed -i '11i Environment="HTTPS_PROXY=http://proxy:pass@172.17.1.252:53128"' /usr/lib/systemd/system/docker.service
sed -i '12i Environment="NO_PROXY=127.0.0.0/8,172.17.0.0/16"' /usr/lib/systemd/system/docker.service
# restart
systemctl daemon-reload
systemctl restart docker
systemctl status docker
docker info | grep Cgroup
iptables -nvL | grep FORWARD
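docker refuses to start if daemon.json is malformed JSON, so it is worth validating the file after editing and before restarting. A sketch using a scratch copy and python3's json.tool (python3 is an assumption here; on a stock CentOS 7 use `python -m json.tool`):

```shell
# Scratch copy of the snippet above; validate before (re)starting docker.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on malformed input.
if python3 -m json.tool < /tmp/daemon.json > /dev/null; then
    echo "daemon.json OK"
else
    echo "daemon.json is broken, fix it before restarting docker"
fi
```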

3.0.1 GFW causes docker image pulls to fail

Because of the GFW, `kubeadm config images pull` or `kubeadm init` fails; the root cause is that `docker pull` cannot fetch the images. You can either edit docker's unit file yourself and add a proxy via Environment, or use the following detour:

First, list the image versions kubeadm needs:

kubeadm config images list

Then adjust the script below as needed and run it: 4)

cat > gfw.sh <<'EOF'
MY_REGISTRY=gcr.azk8s.cn/google-containers
## pull the images
docker pull ${MY_REGISTRY}/kube-apiserver:v1.15.1
docker pull ${MY_REGISTRY}/kube-controller-manager:v1.15.1
docker pull ${MY_REGISTRY}/kube-scheduler:v1.15.1
docker pull ${MY_REGISTRY}/kube-proxy:v1.15.1
docker pull ${MY_REGISTRY}/pause:3.1
docker pull ${MY_REGISTRY}/etcd:3.3.10
docker pull ${MY_REGISTRY}/coredns:1.3.1
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
## re-tag them
docker tag ${MY_REGISTRY}/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag ${MY_REGISTRY}/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag ${MY_REGISTRY}/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag ${MY_REGISTRY}/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag ${MY_REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
# remove the now-unneeded tags
docker images | grep ${MY_REGISTRY} | awk '{print "docker rmi "  $1":"$2}' | sh -x
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
EOF
bash gfw.sh
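For the k8s.gcr.io images, the repetitive pull/tag/cleanup lines can also be generated from a single image list (flannel still needs its separate quay-mirror handling as above). A dry-run sketch that only prints the docker commands; pipe the output to `sh` to actually run them:

```shell
# Image list taken from `kubeadm config images list` for v1.15.1.
MY_REGISTRY=gcr.azk8s.cn/google-containers
IMAGES="kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1
kube-scheduler:v1.15.1 kube-proxy:v1.15.1 pause:3.1 etcd:3.3.10 coredns:1.3.1"

# Dry run: emit the commands instead of executing them.
for img in $IMAGES; do
    echo "docker pull ${MY_REGISTRY}/${img}"
    echo "docker tag ${MY_REGISTRY}/${img} k8s.gcr.io/${img}"
    echo "docker rmi ${MY_REGISTRY}/${img}"
done > /tmp/gfw-dryrun.sh
cat /tmp/gfw-dryrun.sh
```

Seven images, three commands each, so the generated file has 21 lines.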

3.1 Initializing the cluster

# list & pull the docker images kubeadm needs
kubeadm config images list
kubeadm config images pull
# inspect the docker images
docker image ls
# initialize the cluster
kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
# if it fails, reset and try again
kubeadm reset

On success it returns output like:

Kubernetes master is running at https://172.17.2.81:6443
KubeDNS is running at https://172.17.2.81:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

.....
### save this line; it is used to join the worker nodes to the cluster
kubeadm join 172.17.2.81:6443 --token nbdr0f.rf068f7h8v2edfyf \
    --discovery-token-ca-cert-hash sha256:f95861f8600a3a516ca057a5f6ad026e9aaea3d8e6c48f990974911f148534c9

Next, configure kubectl's connection info on the control node:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Verify:

kubectl cluster-info
kubectl get cs
# NotReady for now: the network addon is still missing
kubectl get nodes

Install the pod network addon: 5)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# now Ready
kubectl get nodes
# the flannel pods are present
kubectl get pods -n kube-system
kubectl get ns

Run this on each worker node to join it to the cluster:

kubeadm join 172.17.2.81:6443 --token nbdr0f.rf068f7h8v2edfyf \
    --discovery-token-ca-cert-hash sha256:f95861f8600a3a516ca057a5f6ad026e9aaea3d8e6c48f990974911f148534c9
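Bootstrap tokens expire (24h by default), so the saved join line may stop working when a node is added later; on the master, `kubeadm token create --print-join-command` prints a fresh one. For illustration, the join command is just three parts assembled (values from the `kubeadm init` output above):

```shell
# On the master, a fresh join command can be printed with:
#   kubeadm token create --print-join-command
# It is assembled from three parts taken from the `kubeadm init` output:
APISERVER=172.17.2.81:6443
TOKEN=nbdr0f.rf068f7h8v2edfyf
CA_HASH=sha256:f95861f8600a3a516ca057a5f6ad026e9aaea3d8e6c48f990974911f148534c9

echo "kubeadm join ${APISERVER} --token ${TOKEN} --discovery-token-ca-cert-hash ${CA_HASH}" \
    > /tmp/join.cmd
cat /tmp/join.cmd
```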

Then check the cluster members back on the control node:

[root@master ~]# kubectl get nodes
 
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   23m   v1.15.1
node1    Ready    <none>   32s   v1.15.1
node2    Ready    <none>   31s   v1.15.1

Test:

kubectl create deployment test-nginx --image=nginx
kubectl get pods -o wide
kubectl scale deployment test-nginx --replicas=4
kubectl get pods -o wide
kubectl expose deployment test-nginx --port 80 
kubectl describe service test-nginx 
# access the service (cluster IP from the describe output above)
curl <ip>

Clean up the test resources:

kubectl delete deployment test-nginx
kubectl delete service test-nginx


1)
This is for the kubelet: if swap is enabled, the kubelet fails to start; this can be ignored by passing --fail-swap-on=false
2)
According to the CRI installation document, on Linux distributions that use systemd as the init system, using systemd as docker's cgroup driver makes nodes more stable under resource pressure, so the cgroup driver is switched to systemd on every node
3)
Starting with version 1.13, Docker changed its default firewall rules and blocks the FORWARD chain of the iptables filter table, which breaks cross-node pod communication in a Kubernetes cluster; so after installing docker, the iptables rules still have to be adjusted by hand.
4)
mirrors for k8s.gcr.io and quay.io
5)
This step pulls the docker image quay.io/coreos/flannel, so docker needs its proxy configured, otherwise the nodes keep reporting "No networks found in /etc/cni/net.d"; if the image was pulled in advance, there is no problem
  • virtualization/k8s/k8s_deploy_kubeadm.txt
  • Last modified: 2019/11/04 20:40
  • by mrco