A Hands-On, Complete and Efficient Offline Kubernetes (k8s) Deployment Guide

Author: Hao Jianwei

Background

As on-site project deliveries increase, we occasionally encounter customer environments with no public internet access — fully air-gapped networks. This calls for a complete and efficient offline deployment solution.

System Resources

No.  Hostname          IP             Type           CPU  Memory  Disk
01   k8s-master1       10.132.10.91   CentOS-7       4c   8g      40g
02   k8s-master2       10.132.10.92   CentOS-7       4c   8g      40g
03   k8s-master3       10.132.10.93   CentOS-7       4c   8g      40g
04   k8s-worker1       10.132.10.94   CentOS-7       8c   16g     200g
05   k8s-worker2       10.132.10.95   CentOS-7       8c   16g     200g
06   k8s-worker3       10.132.10.96   CentOS-7       8c   16g     200g
07   k8s-worker4       10.132.10.97   CentOS-7       8c   16g     200g
08   k8s-worker5       10.132.10.98   CentOS-7       8c   16g     200g
09   k8s-worker6       10.132.10.99   CentOS-7       8c   16g     200g
10   k8s-harbordeploy  10.132.10.100  CentOS-7       4c   8g      500g
11   k8s-nfs           10.132.10.101  CentOS-7       2c   4g      2000g
12   k8s-lb            10.132.10.120  LB (internal)  2c   4g      40g
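For name-based access between nodes, the inventory above can be turned into /etc/hosts entries. A minimal sketch, written to a demo file under /tmp rather than the real /etc/hosts; the hostnames for rows 02/03 are assumed to be k8s-master2/k8s-master3 (the source repeats k8s-master1):

```shell
#!/bin/sh
# Sketch: emit /etc/hosts entries for the cluster inventory.
# Written to a demo file instead of /etc/hosts; append manually after review.
cat > /tmp/k8s-hosts-demo <<'EOF'
10.132.10.91  k8s-master1
10.132.10.92  k8s-master2
10.132.10.93  k8s-master3
10.132.10.94  k8s-worker1
10.132.10.95  k8s-worker2
10.132.10.96  k8s-worker3
10.132.10.97  k8s-worker4
10.132.10.98  k8s-worker5
10.132.10.99  k8s-worker6
10.132.10.100 k8s-harbordeploy
10.132.10.101 k8s-nfs
10.132.10.120 k8s-lb
EOF
grep -c k8s /tmp/k8s-hosts-demo
# prints 12
```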

Parameter Configuration

Note: run the following on all nodes.

System basics

Create the working, log, and data directories:

$ mkdir -p /export/servers
$ mkdir -p /export/logs
$ mkdir -p /export/data
$ mkdir -p /export/upload
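The directory creation above can be collapsed into a single command with brace expansion. A sketch using a throwaway base directory instead of /export, so it is safe to run anywhere:

```shell
#!/bin/bash
# Sketch: create the same directory layout in one command.
# BASE is a stand-in for /export so the example does not need root.
BASE=/tmp/export-demo
mkdir -p "$BASE"/{servers,logs,data,upload}
ls "$BASE"
```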

Kernel and network parameter tuning

$ vim /etc/sysctl.conf
vm.max_map_count = 262144
$ sysctl -p    # take effect immediately

ulimit tuning

$ vim /etc/security/limits.conf

SSH passwordless login

On the deploy machine, generate a key pair:

$ ssh-keygen -t rsa

If the authorized_keys file does not exist on the target node, create it first, then paste the public key in:

$ touch ~/.ssh/authorized_keys
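Appending the public key can be scripted instead of pasted by hand. A minimal sketch using demo paths under /tmp rather than the real ~/.ssh, with a truncated placeholder key line (on a real node this is what `ssh-copy-id` does for you):

```shell
#!/bin/sh
# Sketch: append a public key to an authorized_keys-style file.
# Demo paths; the key line is a placeholder, not a real key.
mkdir -p /tmp/ssh-demo
touch /tmp/ssh-demo/authorized_keys
echo "ssh-rsa AAAAB3Nza...demo user@deploy" >> /tmp/ssh-demo/authorized_keys
chmod 600 /tmp/ssh-demo/authorized_keys    # sshd requires strict permissions
ls -l /tmp/ssh-demo/authorized_keys
```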
3. Deployment steps

1) Online install

$ yum -y install

2) Offline install

Upload the docker rpm package and all dependency rpms in advance, then switch to the rpm directory:

$ yum -y install ./*.rpm

3) Reload the configuration, start docker, and check its status

$ systemctl start docker
$ systemctl status docker

4) Enable start on boot

$ systemctl enable docker

5) Check the version

$ docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Git commit:        100c701
 Built:             Mon Jun  6 23:05:12 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41
  Git commit:       a89b842
  Built:            Mon Jun  6 23:03:33 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
docker-compose installation

1. Environment

Name            Description
OS
docker-compose  docker-compose-linux-x86_64
Node            deploy

2. Deployment notes

Required as a dependency of the harbor private image registry.

3. Deployment steps

1) Download docker-compose and upload it to the server

$ curl -L

2) Install the binary and grant execute permission

$ mv docker-compose /usr/local/bin/
$ chmod +x /usr/local/bin/docker-compose

3) Check the version

$ docker-compose version
harbor installation

1. Environment

Name    Description
OS
harbor
Node    harbor

2. Deployment notes

Private image registry.

3. Download the harbor offline installer and upload it to the server

$ wget

4. Extract the package

Extract the archive into /export/servers/, then:

$ cd /export/servers/harbor

5. Edit the configuration file

6. Set the following values

hostname: 10.132.10.100:8090
data_volume: /export/data/
log location: /export/logs/harbor
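The hostname change in step 6 can also be scripted rather than edited by hand. A sketch with sed against a sample harbor.yml-style file under /tmp (the real file lives in /export/servers/harbor):

```shell
#!/bin/sh
# Sketch: set hostname in a harbor.yml-style file without opening an editor.
cat > /tmp/harbor-demo.yml <<'EOF'
hostname: reg.mydomain.com
data_volume: /data
EOF
sed -i 's|^hostname:.*|hostname: 10.132.10.100:8090|' /tmp/harbor-demo.yml
grep ^hostname /tmp/harbor-demo.yml
# prints: hostname: 10.132.10.100:8090
```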
7. Import the harbor images

If the configuration file is modified again later, rerun the script so the changes take effect:

$ ./

All nodes return the following:

CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "acd3897edb624cd18a197bcd026e6769797f4f05",
    "dest": "/export/upload/",
    "gid": 0,
    "group": "root",
    "md5sum": "3ba6d9fe6b2ac70860b6638b88d3c89d",
    "mode": "0644",
    "owner": "root",
    "secontext": "system_u:object_r:usr_t:s0",
    "size": 103234394,
    "src": "/root/.ansible/tmp//source",
    "state": "file",
    "uid": 0
}

3) Extract and install

$ ansible k8s-m shell -a "tar xzvf /export/upload/ && yum -y install /export/upload/docker-rpm/*"

4) Enable start on boot and start the service

$ ansible k8s-m shell -a "systemctl enable docker && systemctl start docker"

5) Check the version

$ ansible k8s-m shell -a "docker version"

Add the Alibaba Cloud YUM repository for Kubernetes:

$ cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=

Download the offline packages

Run the command: /export/download/kubeadm-rpm

Install in the air-gapped environment

1) Upload the kubeadm rpms and their dependencies

$ ls /export/upload/

2) Distribute the packages

$ ansible k8s-m copy -a "src=/export/upload/ dest=/export/upload/"

Pre-pull the images in an environment with internet access:

$ docker pull /google_containers/coredns:
$ docker pull /google_containers/etcd:3.5.0-0
$ docker pull /google_containers/kube-apiserver:
$ docker pull /google_containers/kube-controller-manager:
$ docker pull /google_containers/kube-proxy:
$ docker pull /google_containers/kube-scheduler:
$ docker pull /google_containers/pause:3.5
$ docker pull rancher/mirrored-flannelcni-flannel-cni-plugin:
$ docker pull rancher/mirrored-flannelcni-flannel:

Tag the images for the harbor registry:

$ docker tag /google_containers/coredns: 10.132.10.100:8090/community/coredns:
$ docker tag /google_containers/etcd:3.5.0-0 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker tag /google_containers/kube-apiserver: 10.132.10.100:8090/community/kube-apiserver:
$ docker tag /google_containers/kube-controller-manager: 10.132.10.100:8090/community/kube-controller-manager:
$ docker tag /google_containers/kube-proxy: 10.132.10.100:8090/community/kube-proxy:
$ docker tag /google_containers/kube-scheduler: 10.132.10.100:8090/community/kube-scheduler:
$ docker tag /google_containers/pause:3.5 10.132.10.100:8090/community/pause:3.5
$ docker tag rancher/mirrored-flannelcni-flannel-cni-plugin: 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:
$ docker tag rancher/mirrored-flannelcni-flannel: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:

The following output is shown:

[init] Using Kubernetes version:
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [] and IPs [172.16.0.1 10.132.10.91]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "" kubeconfig file
[kubeconfig] Writing "" kubeconfig file
[kubeconfig] Writing "" kubeconfig file
[kubeconfig] Writing "" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient]
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [/master(deprecated) /exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [/master:NoSchedule]
[bootstrap-token] Using token:
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/ $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/

"kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  \ --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
    --control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" as root:

  :6443 -- \
    --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2
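The repetitive pull/tag pairs above lend themselves to a loop. A sketch that only prints the docker commands rather than running them (shown for the google_containers subset; the source registry prefix was elided in the source, so SRC_PREFIX is a placeholder — drop the echo to execute for real):

```shell
#!/bin/sh
# Sketch: generate docker pull/tag commands for a private registry.
# SRC_PREFIX is a placeholder for the real source registry prefix.
REG="10.132.10.100:8090/community"
SRC_PREFIX="google_containers"
for img in coredns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler pause; do
    echo "docker pull $SRC_PREFIX/$img"
    echo "docker tag  $SRC_PREFIX/$img $REG/$img"
done
```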

6) Generate the kubelet environment configuration file

Create the file:

$ touch /export/servers/kubernetes/
$ vim /export/servers/kubernetes/

In a networked environment the public image address can be used; offline, the private harbor address is required:

        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: /rancher/mirrored-flannelcni-flannel:
        # In a networked environment the address above can be used; offline, use the private harbor address
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath:
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/
          type: FileOrCreate

8) Install the flannel network plugin

Check pod status:

$ kubectl get pods -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-kjmt4              1/1     Running   0          148m
kube-system    coredns-7f84d7b4b5-7qr8g           1/1     Running   0          4h18m
kube-system    coredns-7f84d7b4b5-fljws           1/1     Running   0          4h18m
kube-system    etcd-master01                      1/1     Running   0          4h19m
kube-system    kube-apiserver-master01            1/1     Running   0          4h19m
kube-system    kube-controller-manager-master01   1/1     Running   0          4h19m
kube-system    kube-proxy-wzq2t                   1/1     Running   0          4h18m
kube-system    kube-scheduler-master01            1/1     Running   0          4h19m
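A quick way to confirm that everything is healthy is to count pods in any state other than Running. A sketch against a canned sample of `kubectl get pods -A` output rather than a live cluster:

```shell
#!/bin/sh
# Sketch: count pods whose STATUS column is not "Running".
# $4 is STATUS in the `kubectl get pods -A` layout (NAMESPACE NAME READY STATUS ...).
sample="NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f84d7b4b5-7qr8g 1/1 Running 0 4h18m
kube-system etcd-master01 1/1 Running 0 4h19m
kube-system kube-proxy-wzq2t 0/1 Pending 0 1m"
bad=$(echo "$sample" | awk 'NR>1 && $4!="Running" {c++} END {print c+0}')
echo "$bad"
# prints 1
```

Against a real cluster, replace the canned sample with `kubectl get pods -A | awk ...` and alert when the count is non-zero.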

9) Join the other master nodes

List the tokens:

$ kubeadm token list

Run the join command on each of the other master nodes. If it fails, the certificate-key has usually expired; refresh it on master01 with:

$ kubeadm init phase upload-certs --upload-certs
3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f

Generate the kubelet environment configuration file:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/ $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the worker nodes, run the join command produced by the `kubeadm init` on master01. If it fails, the token has usually expired; regenerate the join command on master01:

$ :6443 \
    -- \
    --discovery-token-ca-cert-hash sha256:cf30ddd3df1c6215b886df1ea378a68ad5a9faad7933d53ca9891ebbdf9a1c3f

Check the cluster status:

$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane,
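When scripting the join of many nodes, the token can be pulled out of the `kubeadm token list` output with awk. A sketch against canned output (the token value is a made-up example, not a real token):

```shell
#!/bin/sh
# Sketch: grab the first token from sample `kubeadm token list` output.
sample="TOKEN                     TTL   EXPIRES
abcdef.0123456789abcdef   23h   2022-09-01T10:00:00Z"
token=$(echo "$sample" | awk 'NR==2 {print $1}')
echo "$token"
# prints abcdef.0123456789abcdef
```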

10) Configure the kubernetes dashboard

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31001
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' configmap.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: [""]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: /master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux

Remove the taint from the master node:

$ /master-

Create the Secret:

$ kubectl create secret tls \
    --

13) Apply the dashboard yml configuration

$ kubectl apply -f /export/servers/kubernetes/

Browser access URL: the IP can be that of any cluster node (or the LB address).

15) Create an access token

Enter the following:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

Expected output:

serviceaccount/admin-user created

Look up the admin-user token:

$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Copy the Token printed by the command above into the Token field on the login page to log in to the dashboard.
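If you read the secret with `kubectl get secret -o jsonpath` instead of `describe`, the token comes back base64-encoded and must be decoded first. A sketch with a sample value (not a real token):

```shell
#!/bin/sh
# Sketch: decode a base64-encoded token field (sample value, not a real token).
encoded="dG9rZW4tZGVtbw=="
decoded=$(echo "$encoded" | base64 -d)
echo "$decoded"
# prints token-demo
```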

13) Log in to the dashboard as shown below



kubectl installation

1. Environment

Name     Description
OS
kubectl  _64
Node     deploy

2. Deployment notes

The Kubernetes kubectl command-line client.

3. Extract the previously uploaded kubeadm rpm package

$

4. Install

$ rpm -ivh bc7a9f8e7c6844_64.rpm

5. Configure kubectl

Copy the kubeconfig content from any master node into the configuration file above.

6. Check the version

$ kubectl version
Client Version: {Major:"1", Minor:"22", GitVersion:"", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"", Compiler:"gc", Platform:"linux/amd64"}
Server Version: {Major:"1", Minor:"22", GitVersion:"", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"", Compiler:"gc", Platform:"linux/amd64"}
helm installation

1. Environment

Name  Description
OS
helm
Node  deploy

2. Deployment notes

Kubernetes package and configuration management tool.

3. Download the helm offline package and upload it to the server

$ wget

4. Extract the package

Extract the archive into /export/servers/, then:

$ cd /export/servers/linux-amd64

5. Install the binary and grant execute permission

$ cp linux-amd64/helm /usr/local/bin/
$ chmod +x /usr/local/bin/helm

6. Check the version

$ helm version
{Version:"", GitCommit:"414ff28d4029ae8c8b05d62aa06c7fe3dee2bc58", GitTreeState:"clean", GoVersion:""}

Set up local storage mounted on NAS

$ mkdir -p /export/servers/helm_chart/local-path-storage
$ cd /export/servers/helm_chart/local-path-storage/

setup: |-
  #!/bin/sh
  while getopts "m:s:p:" opt
  do
      case $opt in
          p) absolutePath=$OPTARG ;;
          s) sizeInBytes=$OPTARG ;;
          m) volMode=$OPTARG ;;
      esac
  done
  mkdir -m 0777 -p ${absolutePath}
teardown: |-
  #!/bin/sh
  while getopts "m:s:p:" opt
  do
      case $opt in
          p) absolutePath=$OPTARG ;;
          s) sizeInBytes=$OPTARG ;;
          m) volMode=$OPTARG ;;
      esac
  done
  rm -rf ${absolutePath}
helperPod.yaml: |-
  apiVersion: v1
  kind: Pod
  metadata:
    name: helper-pod
  spec:
    containers:
      - name: helper-pod
        image: busybox
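The getopts pattern used by the setup/teardown scripts can be exercised standalone. A sketch with sample arguments (the /tmp/demo-vol path is a stand-in for a real volume directory):

```shell
#!/bin/sh
# Sketch: same option parsing as the provisioner's setup/teardown scripts.
set -- -p /tmp/demo-vol -s 1024 -m Filesystem   # sample arguments
while getopts "m:s:p:" opt
do
    case $opt in
        p) absolutePath=$OPTARG ;;
        s) sizeInBytes=$OPTARG ;;
        m) volMode=$OPTARG ;;
    esac
done
mkdir -m 0777 -p "${absolutePath}"
echo "$absolutePath $sizeInBytes $volMode"
# prints /tmp/demo-vol 1024 Filesystem
```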

Note: the images the config above depends on must be downloaded in a networked environment and imported into the registry; set the corresponding image addresses so they are pulled from the private registry.

Apply the local storage yaml

$

Set the k8s default storage class

$ kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"/is-default-class":"true"}}}'

Note: middleware and services deployed later must set their storage to this local storage class: "storageClass": "local-path"
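The patch payload is plain JSON and can be sanity-checked locally before sending it to the API server. A sketch using python3 -m json.tool; the annotation key shown is the standard storageclass.kubernetes.io one, which appears truncated in the source:

```shell
#!/bin/sh
# Sketch: validate the default-StorageClass patch payload locally.
patch='{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
echo "$patch" | python3 -m json.tool > /tmp/patch-pretty.json
grep -q is-default-class /tmp/patch-pretty.json && echo valid
# prints valid
```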

Published on 2025-03-16