Heaven's way does not necessarily reward every effort,
but Heaven's way rewards only effort.

Deploying a K8S Cluster on CentOS


I have been exploring privacy-preserving computation recently. A few days ago I used Docker Compose to deploy a FATE test environment, but it contained only two parties with a single machine per party, which meant serious resource shortages and single points of failure. So I decided to deploy a FATE cluster through KubeFATE to make testing easier.

Before deploying KubeFATE, a Kubernetes environment is needed, so I went through a full K8S cluster build on CentOS and hit plenty of pitfalls along the way.

This deployment uses three machines in total: one Master and two Node machines. The process is as follows.

Preparation

Every operation in this step must be executed on all three machines.

Disable the firewall and SELinux

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld

[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/=enforcing/=disabled/g' /etc/selinux/config

Set the hostnames (optional)

# Set the hostnames to k8s-master1, k8s-node1, and k8s-node2 respectively
[root@localhost ~]# hostnamectl set-hostname k8s-master1
[root@localhost ~]# bash

[root@localhost ~]# hostnamectl set-hostname k8s-node1
[root@localhost ~]# bash

[root@localhost ~]# hostnamectl set-hostname k8s-node2
[root@localhost ~]# bash

Configure the Aliyun yum repository

# Back up the existing repo file
[root@k8s-master1 ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download the Aliyun repo file
[root@k8s-master1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# Rebuild the yum cache
[root@k8s-master1 ~]# yum makecache

Configure hostname resolution

[root@k8s-master1 ~]# cat >> /etc/hosts << EOF
8.130.167.223    k8s-master1
8.130.181.73    k8s-node1
8.130.164.117    k8s-node2
EOF
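As a small illustration, the mapping above can be sanity-checked with a short script. This is a hypothetical helper, not part of the original walkthrough; the sample entries are inlined, whereas on a real node you would read /etc/hosts:

```shell
# Hypothetical sanity check: each node name should map to exactly one entry.
# Sample entries are inlined; on a real machine, read /etc/hosts instead.
hosts_snippet='8.130.167.223    k8s-master1
8.130.181.73    k8s-node1
8.130.164.117    k8s-node2'
for h in k8s-master1 k8s-node1 k8s-node2; do
  count=$(printf '%s\n' "$hosts_snippet" | awk -v h="$h" '$2 == h' | wc -l)
  [ "$count" -eq 1 ] && echo "$h: ok" || echo "$h: missing or duplicated"
done
```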

Configure bridged traffic

[root@k8s-master1 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Load the br_netfilter module and apply the settings right away;
# files under /etc/sysctl.d/ are otherwise only read at boot
[root@k8s-master1 ~]# modprobe br_netfilter
[root@k8s-master1 ~]# sysctl --system

Deploying Docker

# Install some required system tools
[root@k8s-master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker repository
[root@k8s-master1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Point the download source at the Aliyun mirror
[root@k8s-master1 ~]# sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Refresh the cache and install Docker CE
[root@k8s-master1 ~]# yum makecache fast

# List the installable versions
[root@k8s-master1 ~]# yum list docker-ce --showduplicates | sort -r

# Pick a version and install it
[root@k8s-master1 ~]# yum -y install docker-ce-19.03.9

[root@k8s-master1 ~]# systemctl enable docker && systemctl start docker


# Configure a registry mirror to speed up image pulls; requires an account at https://cr.console.aliyun.com
[root@k8s-master1 ~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://sqr9a2ic.mirror.aliyuncs.com"]
}
EOF
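A malformed daemon.json will stop dockerd from starting at all, so it can be worth validating the JSON before the restart. A minimal sketch, assuming python3 is available (any JSON validator works; the file content is inlined here rather than read from /etc/docker/daemon.json):

```shell
# Validate daemon.json syntax before restarting Docker; dockerd refuses to
# start if this file is not valid JSON. Content is inlined for illustration.
daemon_json='{ "registry-mirrors": ["https://sqr9a2ic.mirror.aliyuncs.com"] }'
if printf '%s' "$daemon_json" | python3 -m json.tool > /dev/null; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: INVALID, fix before restarting docker"
fi
```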

# Restart for the change to take effect
[root@k8s-master1 ~]# systemctl restart docker
[root@k8s-master1 ~]# docker info | grep 'Server Version'
 Server Version: 18.09.1

Deploying K8s

Install kubeadm

# Configure the package repository
[root@k8s-master1 ~]# cat  > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Versions change frequently, so pin a specific version here
# Install kubeadm on every node
[root@k8s-master1 ~]# yum install kubeadm-1.20.2 -y
# Note: this pulls in the latest kubelet and kubectl as dependencies, which
# appears to be why the node listings below report v1.22.3 kubelets against a
# v1.20 control plane; to keep versions aligned, pin all three instead:
# yum install -y kubelet-1.20.2 kubeadm-1.20.2 kubectl-1.20.2

# Enable kubelet at boot
[root@k8s-master1 ~]# systemctl enable kubelet

Create the Master

This step only needs to be executed on the Master machine.

$ vi kubeadm.conf
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12

$ kubeadm init --config kubeadm.conf --ignore-preflight-errors=all

On success, kubeadm prints a setup summary. The last lines of that output contain the kubeadm join command with the token needed to add the Node machines; save it (if it is lost, a fresh one can be generated with kubeadm token create --print-join-command).

# Copy the kubeconfig that kubectl uses to authenticate to the cluster into its default path
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Check the Master deployment status

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   18m     v1.22.3

Joining the Node machines (scaling out)

This step only needs to be executed on the two Node machines, one after the other.

On each Node machine, run the join command that was printed when the Master was created:

kubeadm join 172.29.247.176:6443 --token 4qz2a4.bep8keewd3q27vl0 \
>     --discovery-token-ca-cert-hash sha256:707836a5a432b7a4e036d4d280a4ad01682194bf4e32ca3e0a4ba97865386f29
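For reference, the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's DER-encoded public key, so it can be recomputed from /etc/kubernetes/pki/ca.crt if the original output is lost. The sketch below shows the derivation using a throwaway self-signed certificate in place of the real CA file:

```shell
# Recompute a kubeadm-style discovery hash: sha256 over the DER-encoded
# public key of a CA certificate. A throwaway self-signed certificate
# stands in for /etc/kubernetes/pki/ca.crt here.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$tmpdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | awk '{print $NF}')
echo "sha256:$hash"
rm -rf "$tmpdir"
```

On a real master node, point the openssl x509 step at /etc/kubernetes/pki/ca.crt to get the value used in the join command.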

Check the cluster

Back on the master, run:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   18m     v1.22.3
k8s-node1     NotReady   <none>                 5m13s   v1.22.3
k8s-node2     NotReady   <none>                 29s     v1.22.3

Then run the same command on one of the Nodes:

[root@k8s-node1 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

This step fails. It happens because kubectl needs the kubernetes-admin credentials to talk to the cluster. The fix: copy the master's /etc/kubernetes/admin.conf file to the same path on each Node, then point kubectl at it via an environment variable:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Apply it immediately:

source ~/.bash_profile

Running the command again now succeeds.

Inspect the node details

[root@k8s-master1 ~]# kubectl describe node k8s-master1
Name:               k8s-master1
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 08 Nov 2021 20:50:38 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-master1
  AcquireTime:     <unset>
  RenewTime:       Mon, 08 Nov 2021 21:16:53 +0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 08 Nov 2021 21:15:42 +0800   Mon, 08 Nov 2021 20:50:36 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 08 Nov 2021 21:15:42 +0800   Mon, 08 Nov 2021 20:50:36 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 08 Nov 2021 21:15:42 +0800   Mon, 08 Nov 2021 20:50:36 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 08 Nov 2021 21:15:42 +0800   Mon, 08 Nov 2021 20:50:36 +0800   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The last condition reports an error: KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

A network (CNI) plugin needs to be installed.

Install the plugin

[root@k8s-master1 ~]# kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

Run kubectl describe node again and the error is gone.

Verifying the deployment

Create a pod in the Kubernetes cluster to verify that it runs normally:

[root@k8s-master1 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

[root@k8s-master1 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

[root@k8s-master1 ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-9nfdj   1/1     Running   0          2m9s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        37m
service/nginx        NodePort    10.108.207.196   <none>        80:32133/TCP   2m
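In the PORT(S) column, 80:32133/TCP means the service listens on port 80 inside the cluster and is also exposed on port 32133 of every node, so it is reachable at any node IP on that port from outside. As a small, hypothetical illustration of reading that column (the sample line is inlined; in practice you would pipe kubectl output in):

```shell
# Pull the node port out of a "kubectl get svc" line of the shape shown
# above: field 5 is "80:32133/TCP", split on ":" and "/".
svc_line='service/nginx        NodePort    10.108.207.196   <none>        80:32133/TCP   2m'
node_port=$(printf '%s\n' "$svc_line" | awk '{split($5, p, "[:/]"); print p[2]}')
echo "$node_port"   # prints 32133
```

The page could then also be fetched from outside the cluster with curl <node-ip>:32133.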

Send a request to the service Nginx exposes:

[root@k8s-master1 ~]# curl 10.108.207.196:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

That response means it works. Finally, look at the overall picture:

[root@k8s-master1 ~]# kubectl get node -o wide
NAME          STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master1   Ready    control-plane,master   51m   v1.22.3   172.29.247.176   <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64   docker://18.9.1
k8s-node1     Ready    <none>                 37m   v1.22.3   172.29.247.175   <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64   docker://18.9.1
k8s-node2     Ready    <none>                 33m   v1.22.3   172.29.247.177   <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64   docker://18.9.1

This basically completes the K8S cluster deployment. But because this K8S cluster will be used to create a FATE cluster, there is still some extra work to do.

Install the ingress-controller

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

After the download succeeds, edit the manifest and add the following two lines to the nginx-ingress-controller Deployment's pod spec (spec.template.spec), so the controller binds to the host network:

      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet

Run the command to install it:

kubectl create -f mandatory.yaml

Check the status (the mandatory.yaml manifest creates the controller in the ingress-nginx namespace):

kubectl get pods -n ingress-nginx

(The End)

Unless otherwise noted, articles on this site are original; reposts must credit the source. HollisChuang's Blog » Deploying a K8S Cluster on CentOS