This article describes how to install GlusterFS on a recent Kubernetes version.

The deployment scripts on the master branch of the official repository no longer work, mainly because Kubernetes itself has changed: several beta APIs have graduated to stable, so a number of the template YAML files need to be modified. Many of these fixes can be found in the project's issues, but they have not yet been merged into the main branch and still have to be applied by hand. I therefore forked the upstream repository and merged those changes; the fork is at gluster-kubernetes. I also switched the image source to a Chinese mirror, using the images under daocloud.io/daocloud.
Before installing

The following issues can cause the installation to fail; check them before you start.

The GlusterFS server and client versions should match as closely as possible. The server shipped in the image is a fairly old 7.1, so the client must not simply be installed at its latest version. On CentOS 7, rolling the yum packages back works as follows:
```shell
# find the transaction that installed the newer client, then undo it
yum history info glusterfs-fuse
yum history undo ${TRANSACTION_ID}

# install the 7.1 client packages to match the server in the image
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-7/glusterfs-libs-7.1-1.el7.x86_64.rpm
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-7/glusterfs-7.1-1.el7.x86_64.rpm
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-7/glusterfs-client-xlators-7.1-1.el7.x86_64.rpm
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-7/glusterfs-fuse-7.1-1.el7.x86_64.rpm
```
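The "versions should match" rule can be checked with a small helper before (re)installing anything. This is only a sketch: the `same_major_minor` function and the hard-coded `7.1` server version (taken from the image above) are illustrative assumptions, not part of the upstream tooling.

```shell
# Compare two glusterfs versions on major.minor only; keeping
# major.minor identical between client and server avoids mount
# incompatibilities (this check is stricter than op-version negotiation).
same_major_minor() {
  [ "$(echo "$1" | cut -d. -f1-2)" = "$(echo "$2" | cut -d. -f1-2)" ]
}

server_ver="7.1"   # version of glusterd inside the image (assumption from above)
client_ver="$(glusterfs --version 2>/dev/null | awk 'NR==1{print $2}')"

if same_major_minor "$server_ver" "${client_ver:-0.0}"; then
  echo "client/server versions match"
else
  echo "version mismatch: server=$server_ver client=${client_ver:-not installed}" >&2
fi
```

Run this on each node before deploying; a mismatch means the rollback above is still needed.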
- In mainland China, watch out for timezone mismatches between nodes; they can cause mounts to fail.
- The raw devices must not contain any data.
- The required kernel modules must be loaded.
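The first two checks above can be scripted. This is a sketch, not part of the official deployment: the fallback chain for reading the timezone is an assumption (CentOS 7's `timedatectl` lacks `show`, hence the fallbacks), and `wipefs -a` is destructive, so it is shown commented out.

```shell
# 1. Timezone: all nodes should agree (Asia/Shanghai for mainland China).
tz="$(timedatectl show -p Timezone --value 2>/dev/null \
      || readlink /etc/localtime 2>/dev/null | sed 's|.*zoneinfo/||')"
tz="${tz:-$(date +%Z)}"   # last-resort fallback: abbreviated zone name
echo "timezone: $tz"
# timedatectl set-timezone Asia/Shanghai   # uncomment to fix a mismatch

# 2. The raw device must carry no filesystem/partition/LVM signatures.
#    Without -a, wipefs only LISTS signatures (safe); -a erases them.
wipefs /dev/sdb || true   # /dev/sdb may not exist on non-storage nodes
# wipefs -a /dev/sdb      # DESTROYS all data on /dev/sdb
```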
Run on every node

```shell
# confirm the raw device (here /dev/sdb) is visible
fdisk -l

# load the device-mapper modules gluster/heketi need
lsmod | grep dm_snapshot || modprobe dm_snapshot
lsmod | grep dm_mirror || modprobe dm_mirror
lsmod | grep dm_thin_pool || modprobe dm_thin_pool
lsmod | egrep '^(dm_snapshot|dm_mirror|dm_thin_pool)'

# install the 7.1 client packages to match the server in the image
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-7/glusterfs-libs-7.1-1.el7.x86_64.rpm
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-7/glusterfs-7.1-1.el7.x86_64.rpm
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-7/glusterfs-client-xlators-7.1-1.el7.x86_64.rpm
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-7/glusterfs-fuse-7.1-1.el7.x86_64.rpm

glusterfs --version
# glusterfs 7.1
mount.glusterfs -V
# glusterfs 7.1
```
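Note that `modprobe` does not survive a reboot. On systemd-based distributions (CentOS 7 included), the modules can be loaded automatically at boot with a `modules-load.d` drop-in; the filename below is an arbitrary example:

```
# /etc/modules-load.d/glusterfs.conf
dm_snapshot
dm_mirror
dm_thin_pool
```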
Run on the master node

```shell
git clone https://github.com/donggangcj/gluster-kubernetes.git
cd gluster-kubernetes/deploy

# describe the cluster topology; IP_LIST maps hostnames to node IPs
cat << EOF > topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node01"],
              "storage": ["${IP_LIST['node01']}"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node02"],
              "storage": ["${IP_LIST['node02']}"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node03"],
              "storage": ["${IP_LIST['node03']}"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}
EOF

kubectl get nodes

# deploy glusterfs + heketi
ADMIN_KEY=adminkey
USER_KEY=userkey
./gk-deploy -g -y -v --admin-key ${ADMIN_KEY} --user-key ${USER_KEY}

# locate the heketi REST endpoint and make sure it answers
export HEKETI_CLI_SERVER=$(kubectl get svc/heketi --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')
echo $HEKETI_CLI_SERVER
curl $HEKETI_CLI_SERVER/hello

# store the admin key in a secret for the glusterfs provisioner
SECRET_KEY=$(echo -n "${ADMIN_KEY}" | base64)
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: ${SECRET_KEY}
type: kubernetes.io/glusterfs
EOF

# create the StorageClass backed by heketi
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "${HEKETI_CLI_SERVER}"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumetype: "replicate:3"
EOF

kubectl get nodes,pods
# NAME          STATUS   ROLES    AGE    VERSION
# node/node01   Ready    <none>   5d3h   v1.17.0
# node/node02   Ready    <none>   5d3h   v1.17.0
# node/node03   Ready    <none>   5d3h   v1.17.0
# NAME                          READY   STATUS    RESTARTS   AGE
# pod/glusterfs-bhprz           1/1     Running   0          45m
# pod/glusterfs-jt64n           1/1     Running   0          45m
# pod/glusterfs-vkfp5           1/1     Running   0          45m
# pod/heketi-779bc95979-272qk   1/1     Running   0          38m
```
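Once the StorageClass exists, dynamic provisioning can be smoke-tested with a small PVC. The claim name and size below are arbitrary examples; only `storageClassName: glusterfs-storage` ties it to the setup above:

```
# pvc-test.yaml -- apply with: kubectl apply -f pvc-test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-test
spec:
  storageClassName: glusterfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

If `kubectl get pvc gluster-pvc-test` reports `Bound` within a minute or so, heketi has provisioned a replica-3 volume successfully; delete the PVC afterwards to reclaim it.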
Summary

The forked repository above will not be maintained going forward; for installation guidance, the official repository remains the authoritative reference.