Kubernetes in Practice: A Guide to Avoiding the Usual Pitfalls

I. Read This Before Installing

II. Installing a Highly Available Kubernetes Cluster with kubeadm

1. Basic environment setup

The kubeadm installation procedure has barely changed since version 1.14, so this document can be used to install the latest Kubernetes release. The operating system used here is CentOS 7.x.

Note:

Kubernetes official site: https://kubernetes.io/docs/setup/

Latest high-availability installation guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

Hostname             IP address               Description
k8s-master01 ~ 03    192.168.0.107 ~ 109      master nodes x 3
k8s-master-lb        192.168.0.236            keepalived virtual IP
k8s-node01 ~ 02      192.168.0.110 ~ 111      worker nodes x 2
Table 1-1  High-availability Kubernetes cluster plan

Item                 Value
OS version           CentOS 7.9
Docker version       19.03.x
Pod CIDR             172.168.0.0/12
Service CIDR         10.96.0.0/12
Table 1-2  Installation environment

2. Configure /etc/hosts

Configure hosts on all nodes by editing /etc/hosts as follows:

[root@k8s-master01 ~]# cat /etc/hosts
192.168.0.107 k8s-master01
192.168.0.108 k8s-master02
192.168.0.109 k8s-master03
192.168.0.236 k8s-master-lb # If this is not an HA cluster, use Master01's IP here
192.168.0.110 k8s-node01
192.168.0.111 k8s-node02

3. Configure the CentOS 7 yum repositories as follows:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

4. Install the required tools

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

5. On all nodes, disable the firewall, SELinux, dnsmasq, and swap. Configure the servers as follows:

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

6. Disable the swap partition

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

7. Install ntpdate

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

8. Synchronize time on all nodes. Time synchronization is configured as follows:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# Add to crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
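To add the entry without opening an editor, root's crontab can be updated in place (a minimal sketch of one way to do it):

(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com") | crontab -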

9. Configure resource limits on all nodes

ulimit -SHn 65535

vim /etc/security/limits.conf
# Append the following to the end of the file
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
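As a non-interactive alternative to the vim edit above, the same entries can be appended with a heredoc (the values match the block above):

cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF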

10. Set up passwordless SSH from Master01 to the other nodes. The configuration files and certificates generated during installation are all created on Master01, and cluster administration is also performed from Master01. On Alibaba Cloud or AWS, a separate kubectl host is required. Configure the keys as follows:

ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

11. Download all installation source files:

cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git

12. Upgrade the system on all nodes and reboot. The kernel is not upgraded here; it is upgraded separately in the next step.
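A minimal sketch of this step, assuming kernel packages should be excluded since the kernel is upgraded separately below:

yum update -y --exclude=kernel* && reboot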

13. Kernel configuration

CentOS 7 requires a kernel upgrade to 4.18+; this guide upgrades to 4.19.

13.1. Download the kernel packages on master01:

cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

13.2. Copy them from master01 to the other nodes:

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

13.3. Install the kernel on all nodes:

cd /root && yum localinstall -y kernel-ml*

13.4. Change the default boot kernel on all nodes:

grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

13.5. Verify that the default kernel is 4.19:

[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

13.6. Reboot all nodes, then verify that the running kernel is 4.19:

[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

13.7. Install ipvsadm on all nodes

yum install ipvsadm ipset sysstat conntrack libseccomp -y

13.8. Configure the IPVS modules on all nodes. In kernel 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18, use nf_conntrack_ipv4 instead:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# Add the following content
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

13.9. Then enable the module-load service:

systemctl enable --now systemd-modules-load.service

14. Enable the kernel parameters required by a Kubernetes cluster. Configure them on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

15. After configuring the kernel parameters on all nodes, reboot the servers and confirm that the modules are still loaded after the reboot:

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

16. Installing the basic components

This section installs the components used by the cluster, such as Docker CE and the Kubernetes components.

16.1. Install Docker CE 19.03 on all nodes

yum install docker-ce-19.03.* docker-ce-cli-19.03.* -y

16.2. Newer kubelet versions recommend the systemd cgroup driver, so change Docker's cgroup driver to systemd:

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

16.3. Enable Docker to start on boot on all nodes:

systemctl daemon-reload && systemctl enable --now docker

16.4. List the available kubeadm versions:

yum list kubeadm.x86_64 --showduplicates | sort -r

16.5. Install kubeadm, kubelet, and kubectl (1.20.x in this guide) on all nodes:

yum install kubeadm-1.20* kubelet-1.20* kubectl-1.20* -y

16.6. The default pause image comes from the gcr.io registry, which may be unreachable from mainland China, so configure kubelet to use the Alibaba Cloud pause image:

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF

16.7. Enable kubelet to start on boot:

systemctl daemon-reload
systemctl enable --now kubelet

17. Installing the high-availability components

On public clouds, use the provider's own load balancer, such as Alibaba Cloud SLB or Tencent Cloud CLB, instead of HAProxy
and keepalived, because most public clouds do not support keepalived. Also note that on Alibaba Cloud the kubectl client cannot run on a master node: Alibaba Cloud SLB has a loopback limitation, meaning a backend server behind the SLB cannot reach the SLB itself, whereas Tencent Cloud has fixed this issue, so Tencent Cloud is recommended in that scenario.

Note: if this is not an HA cluster, HAProxy and keepalived do not need to be installed.

17.1. Install HAProxy and keepalived on all master nodes via yum:

yum install keepalived haproxy -y

17.2. Configure HAProxy on all master nodes (see the HAProxy documentation for details; the HAProxy configuration is identical on every master node):

[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg 
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01	192.168.0.107:6443  check
  server k8s-master02	192.168.0.108:6443  check
  server k8s-master03	192.168.0.109:6443  check

17.3. Configure keepalived on all master nodes. The configuration differs between nodes, so pay attention to each node's IP address and network interface (the interface parameter) when editing /etc/keepalived/keepalived.conf.

Master01 configuration:

[root@k8s-master01 etc]# mkdir /etc/keepalived

[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.0.107
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
       chk_apiserver
    }
}

Master02 configuration:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.0.108
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
       chk_apiserver
    }
}

Master03 configuration:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.0.109
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
       chk_apiserver
    }
}

Configure the keepalived health-check script on all master nodes:

[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh 
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Then make the script executable:

chmod +x /etc/keepalived/check_apiserver.sh

Start HAProxy and keepalived:

[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
Test the VIP:
[root@k8s-master01 ~]# ping 192.168.0.236 -c 4
PING 192.168.0.236 (192.168.0.236) 56(84) bytes of data.
64 bytes from 192.168.0.236: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.0.236: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.0.236: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.0.236: icmp_seq=4 ttl=64 time=0.063 ms

--- 192.168.0.236 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3106ms
rtt min/avg/max/mdev = 0.062/0.163/0.464/0.173 ms
[root@k8s-master01 ~]# telnet 192.168.0.236 16443
Trying 192.168.0.236...
Connected to 192.168.0.236.
Escape character is '^]'.
Connection closed by foreign host.

If the VIP cannot be pinged, or telnet does not show the ']' escape prompt, treat the VIP as unusable and do not continue. Troubleshoot keepalived first: check the firewall and SELinux, the status of HAProxy and keepalived, the listening ports, and so on.

On all nodes, the firewall must be disabled and inactive: systemctl status firewalld

On all nodes, SELinux must be disabled: getenforce

On master nodes, check the HAProxy and keepalived status: systemctl status keepalived haproxy

On master nodes, check the listening ports: netstat -lntp

18. Cluster initialization

Official initialization documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability

Create the kubeadm-config.yaml configuration file on Master01 as follows:

Master01: (Note: if this is not an HA cluster, change 192.168.0.236:16443 to Master01's address and change 16443 to the apiserver port, which defaults to 6443. Also make sure kubernetesVersion matches the kubeadm version installed on your servers, which you can check with kubeadm version.)

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.107
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.0.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.0.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/12
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Migrate the kubeadm configuration file to the current schema:

kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

Copy new.yaml to the other master nodes, then pre-pull the images on all master nodes to save time during initialization:

kubeadm config images pull --config /root/new.yaml

Enable kubelet to start on boot on all nodes:

systemctl enable --now kubelet  # (If it fails to start now, ignore it; it will start once initialization succeeds.)

Initialize Master01. Initialization generates the certificates and configuration files under /etc/kubernetes; the other master nodes then simply join Master01:

kubeadm init --config /root/new.yaml  --upload-certs

If initialization fails, reset and initialize again:

kubeadm reset -f ; ipvsadm --clear  ; rm -rf ~/.kube

A successful initialization produces a token that other nodes use to join the cluster, so record the token printed in the output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908 \
    --control-plane --certificate-key ac2854de93aaabdf6dc440322d4846fc230b290c818c32d6ea2e500fc930b0aa

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908

Configure the environment variable on Master01 for accessing the Kubernetes cluster:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc

Check the node status:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   NotReady   control-plane,master   74s   v1.20.0

With the kubeadm-based installation, all system components run as containers in the kube-system namespace. Check the Pod status:

[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                   READY     STATUS    RESTARTS   AGE       IP              NODE
coredns-777d78ff6f-kstsz               0/1       Pending   0          14m       <none>          <none>
coredns-777d78ff6f-rlfr5               0/1       Pending   0          14m       <none>          <none>
etcd-k8s-master01                      1/1       Running   0          14m       192.168.0.107   k8s-master01
kube-apiserver-k8s-master01            1/1       Running   0          13m       192.168.0.107   k8s-master01
kube-controller-manager-k8s-master01   1/1       Running   0          13m       192.168.0.107   k8s-master01
kube-proxy-8d4qc                       1/1       Running   0          14m       192.168.0.107   k8s-master01
kube-scheduler-k8s-master01            1/1       Running   0          13m       192.168.0.107   k8s-master01

19. Highly available masters

If the token has expired, generate a new one:

kubeadm token create --print-join-command

Master nodes also need a new --certificate-key:

kubeadm init phase upload-certs --upload-certs

If the token has not expired, just run the join command directly.

Join the other master nodes to the cluster:

kubeadm join 192.168.0.236:16443 --token fgtxr1.bz6dw1tci1kbj977     --discovery-token-ca-cert-hash sha256:06ebf46458a41922ff1f5b3bc49365cf3dd938f1a7e3e4a8c8049b5ec5a3aaa5 \
    --control-plane --certificate-key 03f99fb57e8d5906e4b18ce4b737ce1a055de1d144ab94d3cdcf351dfcd72a8b

20. Node configuration

Node nodes mainly run business workloads. In production it is not recommended to run anything other than system components on master nodes; in test environments you can allow Pods on masters to save resources.

kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908

After all nodes have joined, check the cluster status:

[root@k8s-master01]# kubectl  get node
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   8m53s   v1.20.0
k8s-master02   NotReady   control-plane,master   2m25s   v1.20.0
k8s-master03   NotReady   control-plane,master   31s     v1.20.0
k8s-node01     NotReady   <none>                 32s     v1.20.0
k8s-node02     NotReady   <none>                 88s     v1.20.0

21. Installing Calico

The following steps are executed only on master01:

cd /root/k8s-ha-install && git checkout manual-installation-v1.20.x && cd calico/

Modify the following sections of calico-etcd.yaml:

sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.0.107:2379,https://192.168.0.108:2379,https://192.168.0.109:2379"#g' calico-etcd.yaml


ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml


sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

When making the changes, make sure the subnet in this step has not been replaced by the earlier bulk substitutions; if it has been replaced, change it back:

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
kubectl apply -f calico-etcd.yaml

Check the container status:

[root@k8s-master01 calico]# kubectl  get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-pwvnb   1/1     Running   0          3m29s
calico-node-5lz9m                          1/1     Running   0          3m29s
calico-node-8z4bg                          1/1     Running   0          3m29s
calico-node-lmzvf                          1/1     Running   0          3m29s
calico-node-mpngv                          1/1     Running   0          3m29s
calico-node-vmqsl                          1/1     Running   0          3m29s
coredns-54d67798b7-8525g                   1/1     Running   0          39m
coredns-54d67798b7-fxs72                   1/1     Running   0          39m
etcd-k8s-master01                          1/1     Running   0          39m
etcd-k8s-master02                          1/1     Running   0          33m
etcd-k8s-master03                          1/1     Running   0          31m
kube-apiserver-k8s-master01                1/1     Running   0          39m
kube-apiserver-k8s-master02                1/1     Running   0          33m
kube-apiserver-k8s-master03                1/1     Running   0          30m
kube-controller-manager-k8s-master01       1/1     Running   1          39m
kube-controller-manager-k8s-master02       1/1     Running   0          33m
kube-controller-manager-k8s-master03       1/1     Running   0          31m
kube-proxy-hnkmj                           1/1     Running   0          39m
kube-proxy-jk4dm                           1/1     Running   0          32m
kube-proxy-nbcg2                           1/1     Running   0          32m
kube-proxy-qv9k7                           1/1     Running   0          32m
kube-proxy-x6xdc                           1/1     Running   0          33m
kube-scheduler-k8s-master01                1/1     Running   1          39m
kube-scheduler-k8s-master02                1/1     Running   0          33m
kube-scheduler-k8s-master03                1/1     Running   0          30m

22. Deploying Metrics Server

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

Copy front-proxy-ca.crt from Master01 to all Node nodes:

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt  # copy to the remaining node(s) in the same way

Install metrics-server:

cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm/

[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl  create -f comp.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the status:

[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   109m         2%     1296Mi          33%       
k8s-master02   99m          2%     1124Mi          29%       
k8s-master03   104m         2%     1082Mi          28%       
k8s-node01     55m          1%     761Mi           19%       
k8s-node02     53m          1%     663Mi           17%
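Pod-level metrics can be queried the same way, for example:

kubectl top po -n kube-system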

23. Deploying the Dashboard

The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and execute commands inside containers.

23.1. Install the specified dashboard version

cd /root/k8s-ha-install/dashboard/

[root@k8s-master01 dashboard]# kubectl  create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

23.2. Install the latest version

Official GitHub repository: https://github.com/kubernetes/dashboard

The latest dashboard release can be found on the official GitHub page:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

23.3. Create an administrator user

vim admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

kubectl apply -f admin.yaml -n kube-system

23.4. Log in to the Dashboard

Add the following startup flags to the Google Chrome launcher to work around the certificate errors that block access to the Dashboard (see Figure 1-1):

--test-type --ignore-certificate-errors

Change the dashboard Service type to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change ClusterIP to NodePort (skip this step if it is already NodePort):
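If you prefer not to edit the Service interactively, the same change can be made with a one-line patch (a sketch):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'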

Check the assigned port:

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

Using your instance's port, the dashboard can be reached via the IP of any host running kube-proxy, or via the VIP, plus that port:

Access the Dashboard at https://192.168.0.236:18282 (replace 18282 with your own port) and choose the token login method (see Figure 1-2).

Retrieve the token:

[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r4vcp
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w

Paste the token into the token field and click Sign in to access the Dashboard (see Figure 1-3).

24. Required configuration changes

Switch kube-proxy to IPVS mode. The IPVS setting was left commented out when the cluster was initialized, so it has to be changed manually:

Run on master01:

kubectl edit cm kube-proxy -n kube-system
# in the editor, set:
mode: "ipvs"
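A non-interactive alternative to the interactive edit above (a sketch; it assumes the ConfigMap still contains the default value mode: ""):

kubectl -n kube-system get cm kube-proxy -o yaml | \
  sed 's/mode: ""/mode: "ipvs"/' | \
  kubectl -n kube-system replace -f -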

Roll the kube-proxy Pods so they pick up the change:

kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

Verify the kube-proxy mode:

[root@k8s-master01 1.1.1]# curl 127.0.0.1:10249/proxyMode
ipvs

III. Notes and Caveats

Note: in a kubeadm-installed cluster, certificates are valid for one year by default. On master nodes, kube-apiserver, kube-scheduler, kube-controller-manager, and etcd all run as containers, which you can see with kubectl get po -n kube-system.
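To check when the kubeadm-managed certificates expire (the subcommand below exists in kubeadm 1.20 as shown; older releases use kubeadm alpha certs check-expiration):

kubeadm certs check-expiration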

Compared with a binary installation, startup differs as follows:

The kubelet configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml.

The other components are configured by the static Pod manifests under /etc/kubernetes/manifests, such as kube-apiserver.yaml. When such a YAML file is changed, kubelet automatically reloads the configuration, i.e. it restarts the Pod. Do not re-create these files.
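For reference, the static Pod manifests can be listed on any master node (the file names below are the defaults created by kubeadm):

ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml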

After a kubeadm installation, master nodes do not allow regular Pods by default. This can be changed as follows:

View the taints:

[root@k8s-master01 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule

#### Remove the taints: #########
[root@k8s-master01 ~]# kubectl  taint node  -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted
[root@k8s-master01 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>

Handling ceph HEALTH_ERR 1 scrub errors

1. ceph health detail

HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 2.307 is active+clean+inconsistent, acting [69,174]

2. ceph pg repair 2.307

instructing pg 2.307 on osd.69 to repair

3. ceph health detail

HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 2.307 is active+clean+scrubbing+deep+inconsistent+repair, acting [69,174]


Workflow of a cloud migration project

1. When migrating systems to the cloud, different migration strategies must be adopted depending on each system's business characteristics and implementation technology.

2. Start with a system survey covering the business, system architecture, database, and applications, as well as the relevant business goals.

3. Then evaluate the migration principles, process, and economics, along with the methodology and, above all, the risks, and produce an assessment conclusion.

4. Next comes solution design, including the business architecture, the system architecture, and the system transformation and implementation plans.

5. Before migrating, complete the required changes to the system architecture, databases, and applications, and verify them with tests.

6. Before executing the migration, prepare the necessary resources; during execution, proceed in phases according to the predefined plan.

7. After the migration, carry out functional and performance testing and consistency verification, and organize the baseline materials needed for later operations.

Differences between storage types in cloud computing

HDFS is not suited to low-latency data access, large numbers of small files, or multi-writer workloads that require arbitrary file modification; it does not support concurrent writes from multiple machines.

HBase is suited to low-latency data access.

Ceph supports file storage, block storage, and object storage; it provides CephFS and RBD.

Swift: in Swift object storage, clients must go through the Swift gateway, which introduces a potential single point of failure.

To be continued...

Ceph commands

###############centos7

mkdir my-cluster

cd my-cluster

yum install ceph-deploy -y

ceph-deploy new mon01

yum install python-minimal -y

ceph-deploy install mon01 node01 node02

MDS="mon01"

MON="mon01 node01 node02"

OSDS="mon01 node01 node02"

INST="$OSDS $MON"

echo "osd pool default size = 2
osd max object name len = 256
osd max object namespace len = 64
mon_pg_warn_max_per_osd = 2000
mon clock drift allowed = 30
mon clock drift warn backoff = 30
rbd cache writethrough until flush = false" >> ceph.conf

apt-get install -y ceph-base

apt-get install -y ceph-common

apt-get install -y ceph-fs-common

apt-get install -y ceph-fuse

apt-get install -y ceph-mds

apt-get install -y ceph-mon

apt-get install -y ceph-osd

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/noarch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/SRPMS
enabled=0
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

yum install python-setuptools

yum install -y deltarpm

yum install -y gdisk

ceph mgr module disable dashboard

ceph-deploy mgr create mon01 node01 node02

ceph dashboard ac-user-create admin passw0rd administrator

systemctl restart ceph-mon.target

ceph-deploy --overwrite-conf mds create mon01 node01 node02   # create the MDS daemons

ceph osd pool create cephfs_data 128

ceph osd pool create cephfs_metadata 128

ceph fs new myfs cephfs_metadata cephfs_data

ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it   # delete cephfs_data

ceph fs rm myfs --yes-i-really-mean-it   # delete myfs

mount -t ceph 192.168.169.190:6789:/ /fsdata -o name=admin,secretfile=/etc/ceph/admin.secret
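The secretfile referenced above must contain only the client.admin key; one way to generate it (a sketch, assuming the admin keyring is present on this host):

ceph auth get-key client.admin > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret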

## Add a new node to Ceph

ceph-deploy --overwrite-conf config push admin mon01 node01 node02 node03

ceph-deploy --overwrite-conf mon create node03

ceph-deploy --overwrite-conf mon add node03

ceph-deploy osd create --data /dev/sdb node03

####### Accessing storage via RBD

ceph osd pool create rbd_test 128 128

rbd create rbd_date --size 20480 -p rbd_test

rbd --image rbd_date -p rbd_test info

rbd feature disable rbd_test/rbd_date object-map fast-diff deep-flatten

rbd map rbd_date -p rbd_test

rbd showmapped

ceph osd pool application enable rbd_test rbd_date

mkfs.xfs /dev/rbd0

mkdir /rbddate

mount /dev/rbd0 /rbddate/

dd if=/dev/zero of=/rbddate/10G bs=1M count=10240

# Mount RBD storage in CloudStack

ceph auth get-or-create client.cloudstack mon 'allow r' osd 'allow rwx pool=vm-data'

AQD+VQVfELMbJRAA5LspVxtCykwJ3LFzwYLyFQ==

#### Delete an RBD image

rbd list -p rbd_test

rbd unmap /dev/rbd0

rbd rm rbd_date -p rbd_test

rbd list -p rbd_test   # list images

rbd snap ls rbd_test/dfe36912-ba7f-11ea-a837-000c297bc10e   # list the image's snapshots

rbd snap unprotect rbd_test/dfe36912-ba7f-11ea-a837-000c297bc10e@cloudstack-base-snap   # unprotect the snapshot

rbd snap purge rbd_test/ede76ccf-f86a-4ab7-afa7-1adc4f1b576b   # purge the snapshots

rbd rm rbd_test/ede76ccf-f86a-4ab7-afa7-1adc4f1b576b   # delete the image

rbd children vm-data/2368966f-0ea3-11eb-8538-3448edf6aa08@cloudstack-base-snap   # list child images of the snapshot

rbd flatten vm-data/79900df4-0b18-42cb-854b-c29778f02aff   # flatten a cloned image (detach it from its parent snapshot)

Problem: "This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout."

Solution:

rbd status vm-data/6926af02-27c3-47ad-a7ee-86c7d95aa353   # check the watchers

ceph osd blacklist add 172.31.156.11:0/4126702798

Check for stale RBD watcher information:

[root@node-2 ~]# rbd status compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Watchers:
        watcher=192.168.55.2:0/2900899764 client.14844 cookie=139644428642944

Add the stale watcher to the OSD blacklist, then check whether the watcher is still present:

[root@node-2 ~]# ceph osd blacklist add 192.168.55.2:0/2900899764
blacklisting 192.168.55.2:0/2900899764 until 2018-06-11 14:25:31.027420 (3600 sec)
[root@node-2 ~]# rbd status compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Watchers: none

Delete the RBD image:

[root@node-2 ~]# rbd rm compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Removing image: 100% complete...done.

How to install MinIO object storage

docker run -d --name minio \
  --restart=always --net=host \
  -e MINIO_ACCESS_KEY=admin \
  -e MINIO_SECRET_KEY=passw0rd \
  -v /data:/data \
  minio/minio server \
  http://oss-{01...04}/data/
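Before the sync script below can run, the mc client needs an alias pointing at the MinIO endpoint. A sketch, assuming mc is already installed and the server listens on the default port 9000 of oss-01 (newer mc releases use "mc alias set", older ones "mc config host add"):

mc alias set minio http://oss-01:9000 admin passw0rd
mc ls minio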

######## Client real-time sync script: ########

# Change this to the folder you want to sync/back up in real time
backup="/backup/filebackup"

# Change this to the bucket you back up to
bucket="minio"

# Copy the following block into your SSH session and run it

cat > /etc/systemd/system/miniocfile.service <<EOF
[Unit]
Description=miniocfile
After=network.target

[Service]
Type=simple
ExecStart=$(command -v mc) mirror -w --overwrite ${backup} ${bucket}/${backup}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
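The heredoc above only creates the unit file; to actually start the mirror service, a minimal sketch:

systemctl daemon-reload
systemctl enable --now miniocfile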

#######################################

How to replace a disk in Ceph

1. systemctl stop ceph-osd@21

2. ceph osd out osd.21

3. ceph osd crush remove osd.21

4. ceph auth del osd.21

5. ceph osd rm 21

6. cd /var/lib/ceph/osd/ceph-xx   # find which /dev device this OSD corresponds to

7. umount /dev/sdj

8. Replace the disk

smartctl --all /dev/sde   ######## identify the disk serial number

9. Log in to c01

10. cd /root/my-cluster

11. ceph-deploy osd create --data /dev/sdj c02

What is ELK?

ELK is the abbreviation of three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth component, Filebeat, has been added to the stack: it is a lightweight log collection and shipping agent that uses few resources, which makes it well suited to gathering logs on each server and forwarding them to Logstash; it is also the officially recommended tool for this.

1.1. Elasticsearch is an open-source distributed search engine that collects, analyzes, and stores data. Its features include distribution, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.

1.2. Logstash is a tool for collecting, analyzing, and filtering logs, and it supports a large number of data ingestion methods. It usually works in a client/server architecture: the client side is installed on the hosts whose logs need to be collected, while the server side filters and transforms the logs received from each node and forwards them to Elasticsearch.

1.3. Kibana is also a free and open-source tool. It provides a friendly web UI for the log analytics delivered by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
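As a concrete illustration of the Filebeat-to-Logstash pipeline described above, a minimal Filebeat configuration sketch; the log path /var/log/*.log and the host logstash-host:5044 are placeholders, not values from this guide:

cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log              # logs to ship; adjust to your application's log path

output.logstash:
  hosts: ["logstash-host:5044"]   # Logstash Beats input (placeholder host)
EOF
systemctl restart filebeat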

How is cloud security different?

Cloud security refers to protecting the data, applications, and infrastructure involved in cloud computing. The security considerations for cloud environments (public, private, or hybrid) overlap substantially with those of on-premises IT architectures.

The main security concerns are unauthorized data disclosure and leakage, weak access control, susceptibility to network attacks, and availability disruptions. Both traditional IT systems and cloud systems are exposed to these risks. As in any computing environment, cloud security means continuously providing adequate preventive protection so that you can:

Be confident that data and systems are secure.
Maintain visibility into the current security posture.
Be promptly informed of any anomalies.
Track unexpected incidents and respond to them.

Why cloud security is different

Although many people understand the advantages of cloud computing, the various security threats scare them away. Admittedly, something that sits between amorphous resources delivered over the internet and physical servers is hard to wrap your head around. The cloud is a dynamic, constantly changing environment; the threats it faces, for example, keep evolving. That also means that, to a large extent, cloud security is simply IT security. Once the concrete differences between the two are clear, the word "cloud" no longer needs to feel insecure.

Dissolving perimeters

Security is closely tied to access control. Traditional environments typically rely on a perimeter security model to control access. Cloud environments are highly interconnected, which lets traffic bypass traditional perimeter defenses with ease. Insecure application programming interfaces (APIs), weak identity and credential management, account hijacking, and malicious insiders all expose systems and data to threats. Preventing unauthorized access to the cloud requires a shift to a data-centric approach: encrypt the data, strengthen the authorization process, require strong passwords and two-factor authentication, and secure every layer.

Today, everything lives in software

The "cloud" refers to hosted resources delivered to users through software. Cloud computing infrastructure, and all the data it processes, is dynamic, scalable, and portable. Controlling cloud security therefore means responding to the environment's variables and to the static and dynamic workloads and data tied to them, either through controls built into the workloads themselves (such as encryption) or dynamically through cloud management systems and APIs. This helps prevent system compromise and data loss in the cloud environment.

A complex threat landscape

Complex threats are any threats that adversely affect modern computing, which naturally includes cloud computing. Increasingly sophisticated malware and other attacks, such as advanced persistent threats (APTs), bypass network defenses by exploiting vulnerabilities in the computing stack. Data breaches can lead to unauthorized information disclosure and data tampering. There is no single definitive remedy for these threats beyond adopting cloud security practices that keep evolving as new threats emerge.

The diagram below is the author's private cloud security architecture for a data center.

(Figure: private cloud architecture proposal for a data center)

What is cloud computing?

Some down-to-earth analogies for "cloud computing":

1. The off-color analogy:

A man with a girlfriend or wife is running a self-built private cloud; a single man paying for casual encounters or visits to entertainment venues is consuming a public cloud service, used on demand and elastically scalable; a married man who also keeps a mistress is running a hybrid cloud.

This explanation tends to work well on men; a quick mention is usually enough for them to get it.

2. The ride-hailing analogy:

When you need to get around, cloud computing or cloud services are like taking a taxi, a ride-hailing car, or a shared bike: use them whenever needed and pay by usage (distance).

Buying and driving your own car is a hybrid cloud: the car is yours, but paying for parking or fuel on the road is like partially using a public cloud, while AWS or Microsoft clouds in China are, under policy restrictions, rather like unlicensed cabs.

3. The foodie analogy:

When you are hungry, cooking at home is a self-built private cloud: you have to build a kitchen and buy pots, pans, and groceries, and after the meal you wash the dishes yourself, which is the operations work, costing time and effort. Eating at a restaurant is a public cloud service: order as much as you need, pay the bill, wipe your mouth, and leave; how the kitchen sequences the dishes and speeds up serving is load balancing and virtualization. Hiring a chef to cook at your home is a typical hybrid cloud: you make limited use of the public cloud while keeping your assets secure.