Fixing the grayed-out "Enable Nested VT-x/AMD-V" option >>> (the host CPU must support Intel VT-x or AMD-V virtualization, and the feature must be enabled in the BIOS)
0. Open the folder that contains the VirtualBox executable, type cmd in the address bar, and press Enter.
1. Run VBoxManage.exe list vms and press Enter.
2. Run VBoxManage.exe modifyvm "<VM name>" --nested-hw-virt on (replace on with off to disable), and press Enter.
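A hedged example session (the VM name "Win10-Lab" and the UUID are placeholders; use the exact name printed by list vms):
VBoxManage.exe list vms
"Win10-Lab" {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
VBoxManage.exe modifyvm "Win10-Lab" --nested-hw-virt on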
When a CloudStack database keeps growing, the likely causes are: too much log data or overly long retention of history (VM operation logs, event logs, usage records); leftover records for deleted VMs, volumes, and snapshots that were never purged; and heavy table fragmentation inflating the on-disk size. Once the data grows past a certain point, management operations can time out and the web UI may fail to open.
To summarize, the handling steps are:
1. Purge old data and logs.
2. Tune CloudStack's cleanup parameters.
3. Optimize the database tables.
4. Use partitioned tables.
5. Define archiving and backup strategies.
6. Adjust configuration parameters.
7. Upgrade CloudStack.
8. Monitor and maintain continuously.
For a CloudStack database that keeps growing, the step-by-step solution is as follows:
CloudStack retains historical data by default, so first check whether the automatic cleanup settings are reasonable.
-- Change event retention (default 30 days)
UPDATE `cloud`.`configuration` SET value='7' WHERE name='event.purge.delay';
-- Change usage-statistics retention (default 180 days)
UPDATE `cloud`.`configuration` SET value='30' WHERE name='usage.stats.job.retention.time';
-- Adjust the cleanup job interval (default 86400 seconds = 1 day)
UPDATE `cloud`.`configuration` SET value='86400' WHERE name='event.purge.interval';
# Log in to the CloudStack management node
cloudstack-management purge-old-events -d 7 # purge events older than 7 days
cloudstack-management purge-usage -d 30 # purge usage data older than 30 days
If billing is not needed, usage statistics can be disabled entirely.
UPDATE `cloud`.`configuration` SET value='false' WHERE name='usage.stats.job.enable';
UPDATE `cloud`.`configuration` SET value='1800' WHERE name='usage.stats.job.interval';
Optimize heavily fragmented tables.
OPTIMIZE TABLE cloud_usage.usage_volume;
OPTIMIZE TABLE cloud_usage.usage_event;
OPTIMIZE TABLE cloud.op_nwgrp_work;
Note: for InnoDB tables, prefer ALTER TABLE <table> ENGINE=InnoDB; to avoid the full-table lock taken by OPTIMIZE TABLE.
pt-online-schema-change --alter "ENGINE=InnoDB" D=cloud_usage,t=usage_volume --execute
Partition large tables by time so that old data can be dropped quickly.
Example (cloud_usage.usage_event table):
ALTER TABLE usage_event
PARTITION BY RANGE (TO_DAYS(created)) (
PARTITION p202301 VALUES LESS THAN (TO_DAYS('2023-02-01')),
PARTITION p202302 VALUES LESS THAN (TO_DAYS('2023-03-01')),
PARTITION p_max VALUES LESS THAN MAXVALUE
);
ALTER TABLE usage_event DROP PARTITION p202301;
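A hedged sketch of ongoing partition maintenance (run monthly; the partition names and dates follow the example above and must be adjusted to your own data; the password placeholder is hypothetical):
mysql -u root -p'<password>' cloud_usage <<'SQL'
-- split p_max so next month's rows get their own partition
ALTER TABLE usage_event REORGANIZE PARTITION p_max INTO (
    PARTITION p202303 VALUES LESS THAN (TO_DAYS('2023-04-01')),
    PARTITION p_max VALUES LESS THAN MAXVALUE
);
-- drop the oldest partition that is outside the retention window
ALTER TABLE usage_event DROP PARTITION p202302;
SQL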
Reduce unnecessary log volume.
Example (log4j2.xml):
<Logger name="com.cloud" level="info" additivity="false">
<Logger name="org.apache.cloudstack" level="warn" />
find /var/log/cloudstack/ -name "*.log*" -mtime +7 -delete
Move historical data into archive tables and keep compressed backups.
CREATE TABLE cloud_usage.usage_event_archive LIKE cloud_usage.usage_event;
INSERT INTO cloud_usage.usage_event_archive
SELECT * FROM cloud_usage.usage_event WHERE created < '2023-01-01';
DELETE FROM cloud_usage.usage_event WHERE created < '2023-01-01';
mysqldump -u root -p --single-transaction --quick cloud | gzip > cloud_backup.sql.gz
Newer releases improve data management; for example, CloudStack 4.18+ improves the event-cleanup logic, so upgrading is worth considering.
#!/bin/bash
mysql -u cloud -p'<password>' -e "DELETE FROM cloud_usage.usage_event WHERE created < NOW() - INTERVAL 90 DAY;"
mysqlcheck -o cloud_usage usage_event
mysqldump -u root -p --databases cloud cloud_usage > backup.sql
Following the steps above keeps database growth under control and improves CloudStack performance.
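A hedged example of scheduling the maintenance script above with cron (the script path and log path are hypothetical):
# run weekly, Sunday 03:00
0 3 * * 0 /root/cloudstack_db_maintenance.sh >> /var/log/cloudstack_db_maintenance.log 2>&1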
When VM disks are backed by Ceph RBD, an unexpected host power loss, shutdown, or reboot can leave the VM unable to boot, typically reporting superblock errors. Handling procedure:
Cause: the sudden power loss leaves the RBD image locked, so the VM cannot start until the lock is released. This is Ceph RBD's lock mechanism: with the exclusive-lock feature, an RBD image is locked by its client to prevent concurrent writes from corrupting data. An abrupt power loss can leave that lock unreleased, and the stale lock prevents the VM from starting.
Solution:
# List the locks on an image (specify pool and image name)
rbd lock ls <pool>/<image>
# Force-remove the lock (specify the lock ID)
rbd lock remove <pool>/<image> <lock-id> <client-id>
# Example:
rbd lock ls hdd_pool_01/abad99f9-50dd-4c8d-b595-0ff7f50cfa4d988
rbd lock remove hdd_pool_01/abad99f9-50dd-4c8d-b595-0ff7f50cfa4d "auto 94778552067968" client.1264649449
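After removing the lock, a quick verification (not part of the original notes) that no locks or watchers remain before starting the VM again:
rbd lock ls <pool>/<image>
rbd status <pool>/<image>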
I. Read this before installing
1. Do not use servers or virtual machines whose operating system contains Chinese-language paths or locales.
2. For production environments, the binary installation method is recommended.
3. Replace the IP addresses in this document in a single pass (search and replace), not one by one!
4. If you replaced them one by one, please do not ask me to troubleshoot the resulting failures!
II. Installing a highly available k8s cluster with kubeadm
1. Base environment setup
The kubeadm installation procedure has barely changed since version 1.14, so this document can be used to install the latest k8s cluster; CentOS 7.x is used here.
PS:
K8S official site: https://kubernetes.io/docs/setup/
Latest HA installation guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
| Hostname | IP address | Notes |
| --- | --- | --- |
| k8s-master01 ~ 03 | 192.168.0.107 ~ 109 | master nodes * 3 |
| k8s-master-lb | 192.168.0.236 | keepalived virtual IP |
| k8s-node01 ~ 02 | 192.168.0.110 ~ 111 | worker nodes * 2 |

| Configuration item | Value |
| --- | --- |
| OS version | CentOS 7.9 |
| Docker version | 19.03.x |
| Pod CIDR | 172.168.0.0/12 |
| Service CIDR | 10.96.0.0/12 |
Note: the VIP (virtual IP) must not collide with any existing LAN IP; ping it first and use it only if there is no reply. The VIP must be in the same LAN as the hosts. On public clouds the VIP is the cloud load balancer's address, e.g. an Alibaba Cloud internal SLB address or a Tencent Cloud internal ELB address.
2. Configure hosts
Configure /etc/hosts on all nodes as follows:
[root@k8s-master01 ~]# cat /etc/hosts
192.168.0.107 k8s-master01
192.168.0.108 k8s-master02
192.168.0.109 k8s-master03
192.168.0.236 k8s-master-lb # if this is not an HA cluster, use Master01's IP here
192.168.0.110 k8s-node01
192.168.0.111 k8s-node02
3. Configure the CentOS 7 yum repositories as follows:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
4. Install the required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
5. On all nodes, disable the firewall, selinux, dnsmasq, and swap. Configure as follows:
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
6. Disable the swap partition
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
7. Install ntpdate
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
8. Synchronize the time on all nodes. Time-sync configuration:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# add to crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
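A hedged non-interactive way to add the crontab entry above (equivalent to editing it with crontab -e):
(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com") | crontab -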
9. Configure limits on all nodes
ulimit -SHn 65535
vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
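A hedged non-interactive alternative to editing the file with vim (values copied verbatim from the list above):
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF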
10. Set up passwordless SSH from Master01 to the other nodes. The configuration files and certificates generated during installation are all created on Master01, and the cluster is also managed from Master01; on Alibaba Cloud or AWS a separate kubectl host is needed. Key setup:
ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
11. Download all the installation source files:
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
12. Upgrade the system on all nodes and reboot. The kernel is not upgraded here; it is upgraded separately in the next step:
yum update -y --exclude=kernel* && reboot # CentOS 7 needs this upgrade; on CentOS 8 upgrade as needed
13. Kernel configuration
CentOS 7 needs the kernel upgraded to 4.18+; version 4.19 is used here.
13.1. Download the kernel packages on master01:
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
13.2. Copy them from master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
13.3. Install the kernel on all nodes:
cd /root && yum localinstall -y kernel-ml*
13.4. Change the default boot kernel on all nodes:
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
13.5. Check that the default kernel is 4.19:
[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
13.6. Reboot all nodes, then confirm the running kernel is 4.19:
[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
13.7. Install ipvsadm on all nodes
yum install ipvsadm ipset sysstat conntrack libseccomp -y
13.8. Configure the ipvs modules on all nodes. On kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18 use nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
13.9. Then enable the module-load service:
systemctl enable --now systemd-modules-load.service
14. Enable the kernel parameters required by a k8s cluster; apply on all nodes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
15. After configuring the kernel parameters on all nodes, reboot and confirm the modules are still loaded after the reboot:
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
16. Installing the base components
This section installs the components used by the cluster, such as Docker-ce and the Kubernetes components.
16.1. Install Docker-ce 19.03 on all nodes
yum install docker-ce-19.03.* docker-ce-cli-19.03.* -y
16.2. Newer kubelet versions recommend the systemd cgroup driver, so change Docker's CgroupDriver to systemd:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
16.3. Enable Docker on boot on all nodes:
systemctl daemon-reload && systemctl enable --now docker
16.4. List the available k8s component versions:
yum list kubeadm.x86_64 --showduplicates | sort -r
16.5. Install the latest 1.20.x kubeadm, kubelet, and kubectl on all nodes:
yum install kubeadm-1.20* kubelet-1.20* kubectl-1.20* -y
16.6. The default pause image comes from gcr.io, which may be unreachable from inside China, so configure kubelet to use Alibaba Cloud's pause image:
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
16.7. Enable kubelet on boot:
systemctl daemon-reload
systemctl enable --now kubelet
17. Installing the high-availability components
On public clouds, use the provider's own load balancer (e.g. Alibaba Cloud SLB or Tencent Cloud ELB) instead of haproxy and keepalived, since most public clouds do not support keepalived. Also, on Alibaba Cloud the kubectl client must not sit on a master node: Alibaba Cloud SLB has a loopback limitation, meaning servers behind the SLB cannot reach the SLB address themselves; Tencent Cloud has fixed this, so Tencent Cloud is the easier choice here.
Note: if this is not an HA cluster, haproxy and keepalived do not need to be installed.
17.1. Install HAProxy and KeepAlived on all master nodes via yum:
yum install keepalived haproxy -y
17.2. Configure HAProxy on all master nodes (see the HAProxy documentation for details; the HAProxy configuration is identical on every master node):
[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
frontend k8s-master
bind 0.0.0.0:16443
bind 127.0.0.1:16443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.0.107:6443 check
server k8s-master02 192.168.0.108:6443 check
server k8s-master03 192.168.0.109:6443 check
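A quick optional check (not in the original steps) that the configuration parses before starting the service:
haproxy -c -f /etc/haproxy/haproxy.cfg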
17.3. Configure KeepAlived on all master nodes. The configuration differs per node, so pay attention to each node's IP and network interface (the interface parameter): [root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf
Configuration on the Master01 node:
[root@k8s-master01 etc]# mkdir /etc/keepalived
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens33
mcast_src_ip 192.168.0.107
virtual_router_id 51
priority 101
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.0.236
}
track_script {
chk_apiserver
}
}
Configuration on the Master02 node:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.0.108
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.0.236
}
track_script {
chk_apiserver
}
}
Configuration on the Master03 node:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.0.109
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.0.236
}
track_script {
chk_apiserver
}
}
Configure the KeepAlived health-check script on all master nodes:
[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
chmod +x /etc/keepalived/check_apiserver.sh
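Optionally, run the script once by hand as a sanity check (hedged; note that if haproxy is not running, the script will stop keepalived, which is its designed behaviour):
bash /etc/keepalived/check_apiserver.sh && echo "haproxy OK"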
Start haproxy and keepalived:
[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
Important: if keepalived and haproxy are installed, verify that keepalived is working correctly.
Test the VIP:
[root@k8s-master01 ~]# ping 192.168.0.236 -c 4
PING 192.168.0.236 (192.168.0.236) 56(84) bytes of data.
64 bytes from 192.168.0.236: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.0.236: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.0.236: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.0.236: icmp_seq=4 ttl=64 time=0.063 ms
--- 192.168.0.236 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3106ms
rtt min/avg/max/mdev = 0.062/0.163/0.464/0.173 ms
[root@k8s-master01 ~]# telnet 192.168.0.236 16443
Trying 192.168.0.236...
Connected to 192.168.0.236.
Escape character is '^]'.
Connection closed by foreign host.
If the VIP does not answer ping and telnet does not show the ']' escape prompt, the VIP is not usable; do not continue. Troubleshoot keepalived first: check the firewall and selinux, the haproxy and keepalived service status, the listening ports, and so on.
On all nodes the firewall must be disabled and inactive: systemctl status firewalld
On all nodes selinux must be disabled: getenforce
On the master nodes check the haproxy and keepalived status: systemctl status keepalived haproxy
On the master nodes check the listening ports: netstat -lntp
18. Cluster initialization
Official initialization docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability
Create the kubeadm-config.yaml file on the Master01 node as follows:
Master01: (Note: if this is not an HA cluster, change 192.168.0.236:16443 to Master01's address and 16443 to the apiserver port, 6443 by default. Also set kubernetesVersion to the kubeadm version installed on your own server, shown by kubeadm version.)
Note: in the file below, the host network, podSubnet, and serviceSubnet must not overlap; plan the cluster's subnets before installing.
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.107
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.0.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.0.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/12
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Migrate the kubeadm config file to the current format:
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
Copy the new.yaml file to the other master nodes (see the sketch below), then pre-pull the images on all master nodes to save time during initialization:
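A hedged sketch of the copy step, using the master hostnames defined in /etc/hosts above:
for i in k8s-master02 k8s-master03; do scp /root/new.yaml $i:/root/; done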
kubeadm config images pull --config /root/new.yaml
Enable kubelet on boot on all nodes:
systemctl enable --now kubelet # (it is fine if it fails to start now; it will start once initialization succeeds)
Initialize the Master01 node. Initialization generates the certificates and configuration files under /etc/kubernetes; the other master nodes then just join Master01:
kubeadm init --config /root/new.yaml --upload-certs
If initialization fails, reset and initialize again:
kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
A successful initialization prints token values that other nodes use to join, so record the generated tokens:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908 \
--control-plane --certificate-key ac2854de93aaabdf6dc440322d4846fc230b290c818c32d6ea2e500fc930b0aa
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908
Configure the environment variable on Master01 so it can access the Kubernetes cluster:
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
Check the node status:
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane,master 74s v1.20.0
With the kubeadm installation, all system components run as containers in the kube-system namespace; check the Pod status:
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
coredns-777d78ff6f-kstsz 0/1 Pending 0 14m <none> <none>
coredns-777d78ff6f-rlfr5 0/1 Pending 0 14m <none> <none>
etcd-k8s-master01 1/1 Running 0 14m 192.168.0.107 k8s-master01
kube-apiserver-k8s-master01 1/1 Running 0 13m 192.168.0.107 k8s-master01
kube-controller-manager-k8s-master01 1/1 Running 0 13m 192.168.0.107 k8s-master01
kube-proxy-8d4qc 1/1 Running 0 14m 192.168.0.107 k8s-master01
kube-scheduler-k8s-master01 1/1 Running 0 13m 192.168.0.107 k8s-master01
19. Highly available masters
Note: the following steps are only needed if the token produced by the init command above has expired; skip them if it has not.
Generate a new token after expiry:
kubeadm token create --print-join-command
The masters also need a new certificate key:
kubeadm init phase upload-certs --upload-certs
If the token has not expired, simply run the join command.
Join the other masters to the cluster:
kubeadm join 192.168.0.236:16443 --token fgtxr1.bz6dw1tci1kbj977 --discovery-token-ca-cert-hash sha256:06ebf46458a41922ff1f5b3bc49365cf3dd938f1a7e3e4a8c8049b5ec5a3aaa5 \
--control-plane --certificate-key 03f99fb57e8d5906e4b18ce4b737ce1a055de1d144ab94d3cdcf351dfcd72a8b
20. Node configuration
Worker nodes mainly run the business workloads. In production, master nodes should not run Pods other than the system components; in test environments, masters may run Pods to save resources.
kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908
After all nodes have joined, check the cluster state:
[root@k8s-master01]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane,master 8m53s v1.20.0
k8s-master02 NotReady control-plane,master 2m25s v1.20.0
k8s-master03 NotReady control-plane,master 31s v1.20.0
k8s-node01 NotReady <none> 32s v1.20.0
k8s-node02 NotReady <none> 88s v1.20.0
21. Installing the Calico component
Run the following steps on master01 only:
cd /root/k8s-ha-install && git checkout manual-installation-v1.20.x && cd calico/
Modify the following places in calico-etcd.yaml:
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.0.107:2379,https://192.168.0.108:2379,https://192.168.0.109:2379"#g' calico-etcd.yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
# Note: the step below sets CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod subnet, i.e. it replaces 192.168.x.x/16 with the cluster's Pod CIDR and uncomments the setting.
If you did a bulk IP replacement earlier, make sure this subnet was not replaced along with everything else; if it was, change it back before continuing:
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
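A quick check (not in the original steps) that the substitution landed before applying the manifest; it should print the cluster's Pod CIDR, e.g. 172.168.0.0/12:
grep -A1 "CALICO_IPV4POOL_CIDR" calico-etcd.yaml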
kubectl apply -f calico-etcd.yaml
Check the container status:
[root@k8s-master01 calico]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5f6d4b864b-pwvnb 1/1 Running 0 3m29s
calico-node-5lz9m 1/1 Running 0 3m29s
calico-node-8z4bg 1/1 Running 0 3m29s
calico-node-lmzvf 1/1 Running 0 3m29s
calico-node-mpngv 1/1 Running 0 3m29s
calico-node-vmqsl 1/1 Running 0 3m29s
coredns-54d67798b7-8525g 1/1 Running 0 39m
coredns-54d67798b7-fxs72 1/1 Running 0 39m
etcd-k8s-master01 1/1 Running 0 39m
etcd-k8s-master02 1/1 Running 0 33m
etcd-k8s-master03 1/1 Running 0 31m
kube-apiserver-k8s-master01 1/1 Running 0 39m
kube-apiserver-k8s-master02 1/1 Running 0 33m
kube-apiserver-k8s-master03 1/1 Running 0 30m
kube-controller-manager-k8s-master01 1/1 Running 1 39m
kube-controller-manager-k8s-master02 1/1 Running 0 33m
kube-controller-manager-k8s-master03 1/1 Running 0 31m
kube-proxy-hnkmj 1/1 Running 0 39m
kube-proxy-jk4dm 1/1 Running 0 32m
kube-proxy-nbcg2 1/1 Running 0 32m
kube-proxy-qv9k7 1/1 Running 0 32m
kube-proxy-x6xdc 1/1 Running 0 33m
kube-scheduler-k8s-master01 1/1 Running 1 39m
kube-scheduler-k8s-master02 1/1 Running 0 33m
kube-scheduler-k8s-master03 1/1 Running 0 30m
22. Deploying Metrics
In newer Kubernetes versions, system resource metrics are collected by metrics-server, which reports CPU, memory, disk, and network usage for nodes and Pods.
Copy front-proxy-ca.crt from the Master01 node to all worker nodes:
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node(copy to the remaining nodes yourself):/etc/kubernetes/pki/front-proxy-ca.crt
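A hedged loop form of the copy above, using the worker hostnames from this document:
for i in k8s-node01 k8s-node02; do scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt; done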
Install metrics-server:
cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm/
[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Check the status:
[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01 109m 2% 1296Mi 33%
k8s-master02 99m 2% 1124Mi 29%
k8s-master03 104m 2% 1082Mi 28%
k8s-node01 55m 1% 761Mi 19%
k8s-node02 53m 1% 663Mi 17%
23. Deploying the Dashboard
The Dashboard displays the cluster's resources; it can also be used to view Pod logs in real time and run commands inside containers.
23.1. Install the pinned dashboard version
cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
23.2. Install the latest version
Official GitHub repository: https://github.com/kubernetes/dashboard
The latest dashboard release can be found on the official dashboard page.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
23.3. Create an administrator user
vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
kubectl apply -f admin.yaml -n kube-system
23.4. Log in to the Dashboard
Add the following startup parameters to the Google Chrome shortcut to work around the certificate error that blocks access to the Dashboard (see Figure 1-1):
--test-type --ignore-certificate-errors

Change the dashboard service type to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change ClusterIP to NodePort (skip this step if it is already NodePort).
Check the port number:
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

Using your own instance's port, the dashboard can be reached through any host running kube-proxy, or through the VIP, at IP:port.
Access the Dashboard at https://192.168.0.236:18282 (replace 18282 with your own port) and choose the token login method (see Figure 1-2).

Get the token value:
[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-r4vcp
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w
Paste the token into the token field and click Sign in to access the Dashboard (see Figure 1-3):

24. Required configuration changes
Switch kube-proxy to ipvs mode. The ipvs configuration was commented out during cluster initialization, so it has to be changed manually:
Run on master01:
kubectl edit cm kube-proxy -n kube-system
mode: "ipvs"
Update the kube-proxy Pods:
kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
Verify the kube-proxy mode:
[root@k8s-master01 1.1.1]# curl 127.0.0.1:10249/proxyMode
ipvs
III. Notes
Note: in a kubeadm-installed cluster, the certificates are valid for one year by default. On the master nodes, kube-apiserver, kube-scheduler, kube-controller-manager, and etcd all run as containers; they can be seen with kubectl get po -n kube-system.
Differences from a binary installation at startup:
The kubelet configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml.
The other components' manifests live in /etc/kubernetes/manifests, e.g. kube-apiserver.yaml; when such a yaml file is changed, kubelet automatically reloads the configuration, i.e. restarts the Pod. Do not create these files again yourself.
After a kubeadm installation, master nodes do not schedule Pods by default; this can be changed as follows:
Check the Taints:
[root@k8s-master01 ~]# kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: node-role.kubernetes.io/master:NoSchedule
#### Remove the Taint: ####
[root@k8s-master01 ~]# kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted
[root@k8s-master01 ~]# kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints: <none>
Taints: <none>
Taints: <none>
Ceph: repairing scrub errors / an inconsistent placement group (health output below):
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 2.307 is active+clean+inconsistent, acting [69,174]
instructing pg 2.307 on osd.69 to repair
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 2.307 is active+clean+scrubbing+deep+inconsistent+repair, acting [69,174]
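For reference, a minimal sketch of the commands behind the output above (the pg id 2.307 is taken from the health output; run against your own inconsistent pg):
ceph health detail # lists OSD_SCRUB_ERRORS / PG_DAMAGED and the affected pg
ceph pg repair 2.307 # prints "instructing pg 2.307 on osd.69 to repair"
ceph -s # confirm the cluster returns to HEALTH_OK once the repair finishes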
1. When migrating systems to the cloud, different migration strategies should be adopted according to each system's business characteristics and implementation technology.
2. Start with a system survey covering the business, system architecture, database, and applications, along with the related business goals.
3. Then assess the migration principles, process, economics, and methodology, and especially the risks, and produce an assessment conclusion.
4. Next, design the solution: the business architecture, the system architecture, and the system modification and implementation plans.
5. Before migrating, complete the necessary changes to the system architecture, database, and applications, and verify them through testing.
6. Before the actual migration, prepare the required resources, and carry out the migration in phases according to the predefined plan.
7. After the migration, run functional and performance tests plus consistency checks, and organize the baseline material needed for later operations.
HDFS is not suited to low-latency data access, large numbers of small files, or multi-writer workloads; it does not support arbitrary file modification or concurrent writes from multiple machines.
HBase is suited to low-latency data access.
Ceph supports file, block, and object storage (CephFS and RBD).
Swift: in Swift object storage, clients have to go through the Swift gateway, which introduces potential single points of failure.
To be continued...
############### CentOS 7: deploying Ceph with ceph-deploy
mkdir my-cluster
cd my-cluster
yum install ceph-deploy -y
ceph-deploy new mon01
yum install python-minimal -y
ceph-deploy install mon01 node01 node02
MDS="mon01"
MON="mon01 node01 node02"
OSDS="mon01 node01 node02"
INST="$OSDS $MON"
echo "osd pool default size = 2
osd max object name len = 256
osd max object namespace len = 64
mon_pg_warn_max_per_osd = 2000
mon clock drift allowed = 30
mon clock drift warn backoff = 30
rbd cache writethrough until flush = false" >> ceph.conf
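# Note: the apt-get commands below appear intended for Debian/Ubuntu client nodes; the rest of this section targets CentOS 7.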
apt-get install -y ceph-base
apt-get install -y ceph-common
apt-get install -y ceph-fs-common
apt-get install -y ceph-fuse
apt-get install -y ceph-mds
apt-get install -y ceph-mon
apt-get install -y ceph-osd
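# The yum repository definition below is presumably saved as /etc/yum.repos.d/ceph.repo on each node before installing: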
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/noarch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/SRPMS
enabled=0
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
yum install python-setuptools
yum install -y deltarpm
yum install -y gdisk
ceph mgr module disable dashboard
ceph-deploy mgr create mon01 node01 node02
ceph dashboard ac-user-create admin passw0rd administrator
systemctl restart ceph-mon.target
ceph-deploy --overwrite-conf mds create mon01 node01 node02 # create the MDS daemons
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
ceph fs new myfs cephfs_metadata cephfs_data
ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it # delete the cephfs_data pool
ceph fs rm myfs --yes-i-really-mean-it # delete the myfs filesystem
mount -t ceph 192.168.169.190:6789:/ /fsdata -o name=admin,secretfile=/etc/ceph/admin.secret
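A hedged prerequisite for the mount above (not in the original notes): create the mount point and extract the admin key into the secret file:
mkdir -p /fsdata
ceph auth get-key client.admin > /etc/ceph/admin.secret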
## Adding a new node to Ceph
ceph-deploy --overwrite-conf config push admin mon01 node01 node02 node03
ceph-deploy --overwrite-conf mon create node03
ceph-deploy --overwrite-conf mon add node03
ceph-deploy osd create --data /dev/sdb node03
####### Connecting via RBD
ceph osd pool create rbd_test 128 128
rbd create rbd_date --size 20480 -p rbd_test
rbd --image rbd_date -p rbd_test info
rbd feature disable rbd_test/rbd_date object-map fast-diff deep-flatten
rbd map rbd_date -p rbd_test
rbd showmapped
ceph osd pool application enable rbd_test rbd
mkfs.xfs /dev/rbd0
mkdir /rbddate
mount /dev/rbd0 /rbddate/
dd if=/dev/zero of=/rbddate/10G bs=1M count=10240
# Mount RBD storage in CloudStack
ceph auth get-or-create client.cloudstack mon 'allow r' osd 'allow rwx pool=vm-data'
AQD+VQVfELMbJRAA5LspVxtCykwJ3LFzwYLyFQ==
#### Deleting an RBD image
rbd list -p rbd_test
rbd unmap /dev/rbd0
rbd rm rbd_date -p rbd_test
rbd list -p rbd_test # list the images
rbd snap ls rbd_test/dfe36912-ba7f-11ea-a837-000c297bc10e # list the image's snapshots
rbd snap unprotect rbd_test/dfe36912-ba7f-11ea-a837-000c297bc10e@cloudstack-base-snap # unprotect the snapshot
rbd snap purge rbd_test/ede76ccf-f86a-4ab7-afa7-1adc4f1b576b # delete all snapshots of the image
rbd rm rbd_test/ede76ccf-f86a-4ab7-afa7-1adc4f1b576b # delete the image
rbd children vm-data/2368966f-0ea3-11eb-8538-3448edf6aa08@cloudstack-base-snap # list the snapshot's clone children
rbd flatten vm-data/79900df4-0b18-42cb-854b-c29778f02aff # flatten the clone (detach it from its parent snapshot)
Problem: "This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout."
Solution:
rbd status vm-data/6926af02-27c3-47ad-a7ee-86c7d95aa353 # check the watcher entry
ceph osd blacklist add 172.31.156.11:0/4126702798
Check for leftover rbd watch information:
[root@node-2 ~]# rbd status compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Watchers: watcher=192.168.55.2:0/2900899764 client.14844 cookie=139644428642944
Add the leftover watcher to the OSD blacklist, then check whether the watcher is gone:
[root@node-2 ~]# ceph osd blacklist add 192.168.55.2:0/2900899764
blacklisting 192.168.55.2:0/2900899764 until 2018-06-11 14:25:31.027420 (3600 sec)
[root@node-2 ~]# rbd status compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Watchers: none
Delete the rbd image:
[root@node-2 ~]# rbd rm compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Removing image: 100% complete...done.
docker run -d --name minio \
--restart=always --net=host \
-e MINIO_ACCESS_KEY=admin \
-e MINIO_SECRET_KEY=passw0rd \
-v /data:/data \
minio/minio server \
http://oss-{01...04}/data/
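The sync script below assumes the MinIO client mc is installed and that an alias matching the $bucket value ("minio") has been configured. A minimal sketch; the endpoint host and port 9000 are assumptions (older mc versions use mc config host add instead of mc alias set):
mc alias set minio http://oss-01:9000 admin passw0rd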
######## Client-side real-time sync script: ########
# change this to the folder you want to back up in real time
backup="/backup/filebackup"
# change this to the target bucket (mc alias/path) to back up to
bucket="minio"
# paste the whole block below into an SSH session and run it
cat > /etc/systemd/system/miniocfile.service <<EOF
[Unit]
Description=miniocfile
After=network.target
[Service]
Type=simple
ExecStart=$(command -v mc) mirror -w --overwrite ${backup} ${bucket}/${backup}
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
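A hedged follow-up: load and start the unit created above.
systemctl daemon-reload
systemctl enable --now miniocfile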
#######################################
####### Replacing a failed OSD (disk swap):
1. systemctl stop ceph-osd@21
2. ceph osd out osd.21
3. ceph osd crush remove osd.21
4. ceph auth del osd.21
5. ceph osd rm 21
6. cd /var/lib/ceph/osd/ceph-xx # find which /dev device this OSD uses
7. umount /dev/sdj
8. Replace the disk
smartctl --all /dev/sde ######## identify the disk's serial number
9. Log in to c01
10. cd /root/my-cluster
11. ceph-deploy osd create --data /dev/sdj c02
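A hedged final check after the new OSD comes up (standard status commands, not part of the original notes):
ceph osd tree # the new OSD should appear under the target host
ceph -s # wait for backfill/recovery to finish and the cluster to return to HEALTH_OK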