Storage Volume Overview
Files on a container's disk are ephemeral, which causes problems when running important applications in containers. First, when a container crashes, the kubelet restarts it, but the files inside the container are lost: the container starts again in a clean state (the original state of the image). Second, when several containers run together in a Pod, they usually need to share files. The Volume abstraction in Kubernetes solves both of these problems.
A volume in Kubernetes has an explicit lifetime, the same as the Pod that encloses it. A volume therefore outlives any individual container in the Pod, and its data survives container restarts. Of course, when the Pod ceases to exist, the volume ceases to exist as well. Perhaps more importantly, Kubernetes supports many types of volume, and a Pod can use any number of them at the same time.
Volume types supported by Kubernetes:
awsElasticBlockStore azureDisk azureFile cephfs csi downwardAPI emptyDir
fc flocker gcePersistentDisk gitRepo glusterfs hostPath iscsi local nfs
persistentVolumeClaim projected portworxVolume quobyte rbd scaleIO secret
storageos vsphereVolume
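The schema behind these types can be inspected from the cluster itself with kubectl explain. A quick sketch, assuming kubectl is already configured against the cluster:

# List the documented fields of pod.spec.volumes, one per supported volume type
kubectl explain pod.spec.volumes
# Drill into a single type, e.g. the emptyDir source used in the next example
kubectl explain pod.spec.volumes.emptyDir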
emptyDir
An emptyDir volume is first created when a Pod is assigned to a node, and it exists for as long as that Pod runs on that node. As its name says, the volume is initially empty. The containers in the Pod can all read and write the same files in the emptyDir volume, even though the volume may be mounted at the same or different paths in each container. When the Pod is removed from the node for any reason, the data in the emptyDir is deleted permanently.
Some uses for an emptyDir are:
Scratch space, for example for a disk-based merge sort, or temporarily cached data
A checkpoint area that lets a long computation recover from crashes
Holding the files that a content-manager container fetches while a web-server container serves them
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: harbor.ui.com/secret/nginx:1.13
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  imagePullSecrets:
  - name: registrykey
  volumes:
  - name: cache-volume
    emptyDir: {}

Listing / inside the running container shows the cache mount point among the image's own directories:
bin cache etc lib media opt root sbin sys usr
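Because every container in the Pod sees the same emptyDir contents, the volume is a simple way to pass files between containers. A minimal sketch, assuming a hypothetical busybox sidecar that writes into the shared directory while nginx serves it (the pod name and the writer's command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-dir-demo
spec:
  containers:
  - name: nginx
    image: harbor.ui.com/secret/nginx:1.13
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html    # nginx serves files from here
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data                    # same volume, mounted at a different path
  volumes:
  - name: shared
    emptyDir: {}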
HostPath
A hostPath volume mounts a file or directory from the node's filesystem into the Pod. If a Pod needs to use files that live on the node, hostPath is the way to do it.
Be careful when using this type of volume, because:
Pods with identical configuration (for example created from the same podTemplate) may behave differently on different nodes, since the files on each node differ
When Kubernetes adds resource-aware scheduling as planned, it will not be able to account for the resources a hostPath uses
Files or directories created on the underlying host are writable only by root; you must either run your process as root in a privileged container, or change the file permissions on the host so the process can write to the hostPath volume
The following YAML defines an nginx Pod that uses a hostPath volume to mount the host's /etc/localtime onto /etc/localtime inside the container, keeping the container's time in sync with the node.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: harbor.ui.com/secret/nginx:1.13
    ports:
    - containerPort: 80
    volumeMounts:
    - name: host-time
      mountPath: /etc/localtime
  imagePullSecrets:
  - name: registrykey
  volumes:
  - name: host-time
    hostPath:
      path: /etc/localtime

pod/nginx created
Thu Oct 31 11:51:28 CST 2019
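hostPath also takes an optional type field that makes the kubelet validate (or create) the host path before mounting it. A sketch of the volumes section above with validation added, assuming a Kubernetes version that supports hostPath types (v1.8+):

  volumes:
  - name: host-time
    hostPath:
      path: /etc/localtime
      type: File    # fail the mount if /etc/localtime does not exist on the node
# Other accepted values include Directory, DirectoryOrCreate, FileOrCreate and Socket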
NFS
NFS stands for Network File System. With only a little configuration, Kubernetes can mount an NFS export into a Pod. Data on the NFS server is stored permanently, and NFS supports concurrent writes. When the Pod is deleted, the volume is unmounted but its contents are preserved.
Deploying NFS
# yum install -y nfs-utils
# systemctl enable rpcbind.service nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
# vim /etc/exports
/docker 192.168.9.0/24(rw,async,no_root_squash,no_all_squash)
# chown nfsnobody.nfsnobody /docker/
# systemctl start rpcbind.service nfs
# showmount -e 192.168.9.20
Export list for 192.168.9.20:
/docker 192.168.9.0/24
# echo "Hello World" > index.html
Common export options:
ro              the client has read-only access to the shared directory
rw              the client has read-write access to the shared directory
root_squash     when the client accesses the share as root, root is mapped to the anonymous user
no_root_squash  when the client accesses the share as root, root is not remapped
all_squash      every user on the client is mapped to the anonymous user when accessing the share
anonuid         map client users to the specified local user ID
anongid         map client users to the specified local group ID
sync            data is written synchronously to both memory and disk
async           data is buffered in memory first instead of being written straight to disk
insecure        allow unauthorized (non-privileged-port) access from the client
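If /etc/exports is edited later, the export list can be reloaded without restarting the NFS service. A quick sketch using the standard exportfs tool:

# Re-export every entry in /etc/exports, verbosely
exportfs -arv
# Show what is currently exported
exportfs -v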
Creating an nfs-type volume
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: harbor.ui.com/secret/nginx:1.13
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-html
      mountPath: /usr/share/nginx/html
  imagePullSecrets:
  - name: registrykey
  volumes:
  - name: nginx-html
    nfs:
      path: /docker
      server: 192.168.9.20

pod "nginx" deleted
Listing /usr/share/nginx/html inside the recreated container shows the file created on the NFS export:
index.html
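In practice NFS is often consumed through the persistentVolumeClaim type listed earlier, which decouples the Pod from the server address. A minimal sketch against the same export on 192.168.9.20 (the object names and the 1Gi size are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany            # NFS allows many writers at once
  nfs:
    path: /docker
    server: 192.168.9.20
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

A Pod would then declare the volume with persistentVolumeClaim: claimName: nfs-pvc instead of the nfs block above.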
GlusterFS
GlusterFS is an open-source distributed file system with strong scale-out capability: it can support petabytes of storage and thousands of clients, joining machines over the network into one parallel network file system. Its main characteristics are scalability, high performance and high availability.
Installing glusterfs (all nodes)
# yum install centos-release-gluster -y
# yum install glusterfs-server -y
# systemctl start glusterd.service
# systemctl enable glusterd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
# mkdir -p /gfs/data
Adding nodes to the trusted storage pool
# gluster peer probe master
peer probe: success.
# gluster peer probe node01
peer probe: success.
# gluster peer probe node02
peer probe: success.
# gluster pool list
UUID                                  Hostname    State
bb8c5f28-4984-477c-a8fc-1e6347cc9a2c  master      Connected
9ff5a852-27ad-47e7-8798-006db8962ad4  node01      Connected
988ed87a-d429-466c-8ffc-5d8ce707ebd6  node02      Connected
3bc23b91-2f7f-472f-ad57-1f6a4ca149f9  localhost   Connected
Checking the status of the pool nodes
# gluster peer status
Number of Peers: 3

Hostname: master
Uuid: bb8c5f28-4984-477c-a8fc-1e6347cc9a2c
State: Peer in Cluster (Connected)

Hostname: node01
Uuid: 9ff5a852-27ad-47e7-8798-006db8962ad4
State: Peer in Cluster (Connected)

Hostname: node02
Uuid: 988ed87a-d429-466c-8ffc-5d8ce707ebd6
State: Peer in Cluster (Connected)
Creating a volume
# gluster volume create tomcat-data transport tcp \
>   harbor:/gfs/data \
>   master:/gfs/data \
>   node01:/gfs/data \
>   node02:/gfs/data
volume create: tomcat-data: failed: The brick harbor:/gfs/data is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
# The error recommends using a dedicated disk; append 'force' to the command to override the check and create the volume anyway
# gluster volume create tomcat-data transport tcp \
    harbor:/gfs/data \
    master:/gfs/data \
    node01:/gfs/data \
    node02:/gfs/data \
    force
volume create: tomcat-data: success: please start the volume to access data
Checking the volume
# gluster volume info tomcat-data

Volume Name: tomcat-data
Type: Distribute
Volume ID: 0131a258-7ab7-4533-8ec3-73e9ca2d51db
Status: Created
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: harbor:/gfs/data
Brick2: master:/gfs/data
Brick3: node01:/gfs/data
Brick4: node02:/gfs/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
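Type: Distribute means files are spread across the four bricks with no redundancy, so losing a brick loses the files placed on it. If redundancy matters, a replicated volume can be created instead; a sketch using the same hosts (the volume name and brick directories are illustrative):

# Keep two copies of every file, paired across bricks
gluster volume create tomcat-data-rep replica 2 transport tcp \
  master:/gfs/data-rep node01:/gfs/data-rep \
  node02:/gfs/data-rep harbor:/gfs/data-rep \
  force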
Starting the volume
# gluster volume start tomcat-data
volume start: tomcat-data: success
Mounting the volume
# mount -t glusterfs 192.168.9.20:/tomcat-data /mnt
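To confirm the mount succeeded, the filesystem type of /mnt can be checked; a quick sketch:

# df -hT /mnt    # the Type column should read fuse.glusterfs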
Connecting glusterfs to Kubernetes
Using a Kubernetes Endpoints object, the external GlusterFS servers are mapped directly to a service inside Kubernetes.
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: tomcat
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: tomcat
subsets:
- addresses:
  - ip: 192.168.9.20
  - ip: 192.168.9.21
  - ip: 192.168.9.22
  - ip: 192.168.9.23
  ports:
  - port: 49152
    protocol: TCP

service/glusterfs created
endpoints/glusterfs unchanged
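Because the Service has no selector, Kubernetes does not manage its endpoints itself; the Endpoints object must carry the same name as the Service and list the GlusterFS servers by hand. A quick check that the addresses were picked up:

kubectl get endpoints glusterfs -n tomcat
kubectl get svc glusterfs -n tomcat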
Viewing the glusterfs Service
# kubectl describe svc -n tomcat glusterfs
Name:              glusterfs
Namespace:         tomcat
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"glusterfs","namespace":"tomcat"},"spec":{"ports":[{"port":49152,"...
Selector:          <none>
Type:              ClusterIP
IP:                10.107.22.219
Port:              <unset>  49152/TCP
TargetPort:        49152/TCP
Endpoints:         192.168.9.20:49152,192.168.9.21:49152,192.168.9.22:49152 + 1 more...
Session Affinity:  None
Events:            <none>
Creating a glusterfs-type volume
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: harbor.ui.com/secret/nginx:1.13
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-html
      mountPath: /usr/share/nginx/html
  imagePullSecrets:
  - name: registrykey
  volumes:
  - name: nginx-html
    glusterfs:
      path: tomcat-data
      endpoints: glusterfs
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

root@nginx:/# cd /usr/share/nginx/html/
root@nginx:/# echo '123' > index.html
total 4
-rw-r--r-- 2 root root 4 Nov  1 11:51 index.html

NAME    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.99.87.35   <none>        80/TCP    44h
123
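The trailing output shows a file being written through the gluster-backed mount inside the Pod, the nginx Service in front of it, and the page content coming back. A hedged sketch of reproducing that check from a cluster node, using the ClusterIP reported above:

# Write through the gluster volume from inside the Pod
kubectl exec -it nginx -- sh -c "echo '123' > /usr/share/nginx/html/index.html"
# Fetch the page through the nginx Service; expected response: 123
curl 10.99.87.35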