Ways to deploy K8s in production
- kubeadm
Kubeadm is a tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
Reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
- Binary packages
Recommended: download the release binaries and deploy each component by hand to assemble the Kubernetes cluster.
Download: https://github.com/kubernetes/kubernetes/releases
- kops
An automated cluster provisioning tool. For tutorials, best practices, configuration options, and community contact information, see the kOps website.
Docs: https://kops.sigs.k8s.io/getting_started/install/
- Kubespray
Provides Ansible playbooks, an inventory, provisioning tooling, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks.
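For illustration, creating a cluster with kOps on AWS looks roughly like this (a sketch only; the cluster name, S3 state-store bucket, and zone are placeholders, and configured AWS credentials are assumed):
# Placeholder S3 bucket where kOps stores cluster specs
export KOPS_STATE_STORE=s3://my-kops-state-bucket
# Define the cluster, then apply the changes to the cloud
kops create cluster --name=k8s.example.com --zones=us-east-1a
kops update cluster --name=k8s.example.com --yes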
Recommended server hardware configuration
Deploying a K8s cluster quickly with kubeadm
1.1 Requirements
Before starting, the machines used for the Kubernetes cluster must satisfy the following:
- One or more machines running CentOS 8.4 x86_64
- Hardware: 2 GB RAM or more, 2 CPUs or more, 20 GB disk or more
- Full network connectivity between all machines in the cluster
- Internet access, required for pulling images
- Swap disabled
1.2 Environment preparation
Check the subnet shown in your VM software's Virtual Network Editor and adjust the addresses below to match.
In VMware NAT networks the default Linux gateway is 192.168.xxx.2.

| Role       | IP address      | Gateway       | Components installed              |
|------------|-----------------|---------------|-----------------------------------|
| k8s-master | 192.168.223.100 | 192.168.223.2 | docker, kubectl, kubeadm, kubelet |
| k8s-node1  | 192.168.223.101 | 192.168.223.2 | docker, kubectl, kubeadm, kubelet |
| k8s-node2  | 192.168.223.102 | 192.168.223.2 | docker, kubectl, kubeadm, kubelet |
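If the VMs do not have these static addresses yet, NetworkManager can set them; a sketch for the master node (the connection name ens160 is an assumption, check yours with nmcli connection show):
# Assign the static IP, gateway, and DNS from the table above (adjust per node)
nmcli connection modify ens160 ipv4.method manual ipv4.addresses 192.168.223.100/24 ipv4.gateway 192.168.223.2 ipv4.dns 192.168.223.2
nmcli connection up ens160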
2. System initialization (perform on the master and both nodes)
OS: CentOS 8.4; Docker 20.10.12; Kubernetes 1.23.3. Cluster topology: one master, two workers.
1. Check the Linux distribution version
cat /etc/redhat-release
2. Configure the hostnames and edit the hosts file for internal name resolution.
Master node: hostnamectl set-hostname k8s-master
Node1 node: hostnamectl set-hostname k8s-node1
Node2 node: hostnamectl set-hostname k8s-node2
3. After the change, verify the hostname:
cat /etc/hostname
4. With the hostnames set, configure /etc/hosts for resolution; write this file on all three nodes:
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.223.100 k8s-master
192.168.223.101 k8s-node1
192.168.223.102 k8s-node2
If vim is not available, install it first and then rerun the command:
yum -y install vim
5. After writing the file, ping the other nodes to verify that the hosts entries resolve correctly.
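For example:
# Each hostname should resolve via the /etc/hosts entries written above
ping -c 3 k8s-node1
ping -c 3 k8s-node2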
3. Configure the firewall: firewalld
Stop the firewall:
systemctl stop firewalld
Disable the firewall at boot:
systemctl disable firewalld
4. Disable SELinux
Disable it for the current boot:
setenforce 0
Disable SELinux permanently:
vim /etc/selinux/config and set the option to: SELINUX=disabled
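Equivalently, as a non-interactive one-liner (assuming the current value is SELINUX=enforcing):
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config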
Reboot the server:
reboot
Check that SELinux is disabled:
getenforce
5. Disable swap
vim /etc/fstab
# Comment out the line containing the swap partition
free -m
The swap row should now show 0.
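Note that the fstab edit only takes effect after a reboot. To disable swap immediately and comment out the fstab entry non-interactively (equivalent to the manual edit above):
# Turn swap off for the running system
swapoff -a
# Comment out every line containing "swap" in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab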
6. Install Docker
Because access to the Docker site is slow from mainland China, switch to the Aliyun mirror.
Install Docker:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
Check the Docker version to verify the install succeeded:
docker -v
Start Docker and enable it at boot:
systemctl start docker
systemctl enable docker
7. Set Docker's cgroup driver to systemd
Check the current cgroup driver:
docker info | grep Cgroup
Change the cgroup driver to systemd:
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Restart the Docker service:
systemctl restart docker
Check the cgroup driver again:
docker info | grep Cgroup
8. Install Kubernetes on the master and node machines
Create the Kubernetes repo file:
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Install kubectl, kubelet, and kubeadm:
yum install -y kubelet-1.23.3 kubeadm-1.23.3 kubectl-1.23.3
After the install, check the version:
kubelet --version
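The upstream kubeadm guide also enables kubelet at boot; it is safe to do now, since kubeadm starts the service itself during init/join:
systemctl enable kubelet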
9. Initialize Kubernetes with kubeadm
*Run this on the master only
kubeadm init --apiserver-advertise-address=<master IP> --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.3 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
Note: if a node runs into trouble during installation, whether master or worker, kubeadm reset wipes the deployment state so you can rerun the init or rejoin the cluster.
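A typical cleanup sequence looks like this (a sketch; removing the leftover CNI config and kubeconfig is optional housekeeping):
kubeadm reset -f
rm -rf /etc/cni/net.d $HOME/.kube/config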
What does kubeadm init do?
1. [preflight] Runs environment checks and pulls images (kubeadm config images pull)
2. [certs] Creates the certificate directory /etc/kubernetes/pki and generates certificates
3. [kubeconfig] Creates kubeconfig files for connecting to the apiserver under /etc/kubernetes
4. [kubelet-start] Generates the kubelet configuration and starts kubelet
5. [control-plane] Starts the master components as static pods from /etc/kubernetes/manifests
6. [upload-config] [upload-certs] [kubelet] Stores the kubelet configuration in a ConfigMap
7. [mark-control-plane] Labels the master node
8. [bootstrap-token] Sets up the mechanism for kubelets to automatically request certificates
9. [addons] Installs the CoreDNS and kube-proxy add-ons
Run the join command printed by kubeadm init on each node to add it to the cluster (a concrete example follows):
kubeadm join <master IP>:<port>
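A full join command looks roughly like this (the token and hash below are placeholders, use the ones printed by your kubeadm init):
kubeadm join 192.168.223.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>
If the token has expired (the default lifetime is 24 hours), print a fresh join command on the master:
kubeadm token create --print-join-command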
Check the cluster status:
kubectl get nodes
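Until the pod network add-on is installed in step 10, the nodes will report NotReady; that is expected at this point. Once flannel is up, the output should look roughly like:
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   10m   v1.23.3
k8s-node1    Ready    <none>                 8m    v1.23.3
k8s-node2    Ready    <none>                 8m    v1.23.3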
Problems encountered
Analysis:
Environment variable
Cause: kubectl is not yet pointed at the cluster because the kubeconfig was not wired up during initialization (the symptom is typically "The connection to the server localhost:8080 was refused"). Setting the environment variable on the machine resolves it.
Fix:
Step 1: set the environment variable
Option 1: edit the profile
vim /etc/profile and append the line: export KUBECONFIG=/etc/kubernetes/admin.conf
Option 2: append to the file directly
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
Step 2: make it take effect
source /etc/profile
Option 3: copy the kubeconfig into your home directory instead
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
10. Deploy the pod network (flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
(If the coreos path no longer resolves, the manifest now lives in the flannel-io/flannel repository.) The YAML applied is shown below:
---
kind: Namespace
apiVersion: v1
metadata:
name: kube-flannel
labels:
k8s-app: flannel
pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: flannel
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: flannel
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: flannel
name: flannel
namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-flannel
labels:
tier: node
k8s-app: flannel
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"EnableNFTables": false,
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-flannel
labels:
tier: node
app: flannel
k8s-app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
image: docker.io/flannel/flannel-cni-plugin:v1.4.0-flannel1
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
image: docker.io/flannel/flannel:v0.24.4
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: docker.io/flannel/flannel:v0.24.4
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
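After applying the manifest, confirm that the flannel DaemonSet pods are running on every node and that the nodes turn Ready:
kubectl -n kube-flannel get pods -o wide
kubectl get nodes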
11. Test the Kubernetes cluster
- Verify that pods run
- Verify pod-to-pod network connectivity
- Verify DNS resolution
Create a pod in the cluster and check that it runs normally:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
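Once the pod is Running, the NodePort shown by kubectl get svc (a port in the 30000-32767 range) should serve the nginx welcome page from any node's IP; for example, assuming port 30688 was allocated:
curl http://192.168.223.100:30688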
12. Deploy the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
By default the Dashboard is reachable only from inside the cluster; change its Service to type NodePort to expose it externally (the manifest below already includes nodePort: 30001):
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
nodePort: 30001
selector:
k8s-app: kubernetes-dashboard
type: NodePort
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.8
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
Create a service account and bind it to the built-in cluster-admin role:
# Create the user
kubectl create serviceaccount dashboard-admin -n kube-system
# Grant it permissions
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Fetch the user's token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/ {print $1}')
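With the token in hand, open https://192.168.223.100:30001 (any node IP works, since the Service is a NodePort), accept the self-signed certificate warning, choose Token login, and paste the token.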
13. Switch the container runtime to containerd
Reference: https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/#containerd
1. Prerequisites
Mark the node unschedulable:
kubectl cordon <node hostname>
Evict the pods running on the node:
kubectl drain <node hostname> --ignore-daemonsets
Load the required kernel modules and make them persist across reboots:
vim /etc/modules-load.d/containerd.conf
overlay
br_netfilter
sudo modprobe overlay
sudo modprobe br_netfilter
Set the required sysctl parameters and persist them:
vim /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
sudo sysctl --system
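Verify that the modules are loaded and the sysctls took effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward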
2. Install containerd
Install the required packages:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository:
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install containerd:
sudo yum update && sudo yum install -y containerd.io
Generate the default containerd configuration:
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
Edit the configuration: set the sandbox (pause) image, enable the systemd cgroup driver, and configure a docker.io registry mirror:
vim /etc/containerd/config.toml
[plugins."io.containerd. grpc.vl.cri"]
sandbox_image = “registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
...
[plugins."io.containerd. grpc.vl.cri".containerd.runtimes.runc. options]
SystemdCgroup = true
...
[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ["https://docker.mirrors.ustc.edu.cn"]
Restart containerd:
systemctl daemon-reload
systemctl restart containerd
Point kubelet at containerd:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Stop Docker:
systemctl stop docker
Restart kubelet:
systemctl restart kubelet
Return the node to service:
kubectl uncordon <node hostname>
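Finally, confirm the node now reports containerd as its runtime; the CONTAINER-RUNTIME column should read containerd://<version>:
kubectl get nodes -o wide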