Hmm... I tried to come up with a greeting that isn't a cliché, but I've got nothing :(
Anyway, I wanted dynamic volumes in k8s, but without too much hassle, and after poking around for a while I found there are projects that get you most of the way there with NFS.
The two main ones are NFS-Provisioner and NFS-Client.
The names are, honestly, almost identical, so it's really confusing... and as you probably know, the English docs for both are about equally terse. In situations like this, the best thing to do is to just test them!!!
You only find out whether something is actually usable by trying it... so I tried it.
(A video version is... under consideration / making a video is honestly easier than writing these days.)
0. Before starting, install bash-completion.sh for convenience.
bash <(curl -s https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/bash-completion.sh)
1. NFS-Provisioner
- Source used in this walkthrough
- https://github.com/sysnet4admin/IaC/tree/master/manifests/vol/nfs-provisioner
- The original is credited at that link
1-1. First we need to install a serviceaccount, rbac, a svc, and a few other things, so I bundled them all into one file, hahaha :)
[root@m-k8s ~]# kubectl apply -f https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/vol/nfs-provisioner/1.nfs-provisioner+.yaml
serviceaccount/nfs-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
storageclass.storage.k8s.io/nfs-svr-sc created
service/nfs-provisioner created
deployment.apps/nfs-provisioner created
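For reference, the StorageClass piece of that bundled manifest boils down to roughly this (the sc name comes from the apply output above; the provisioner string here is an assumption, so check the linked repo for the real value):

```yaml
# Sketch of the StorageClass part of 1.nfs-provisioner+.yaml.
# "nfs-svr-sc" matches the apply output; the provisioner name
# below is an assumed placeholder -- see the linked repo.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-svr-sc
provisioner: example.com/nfs   # assumed value
```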
1-2. Next, let's claim a 1Gi pvc. Check a moment later and you can see that a pv has been created.
[root@m-k8s ~]# k get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
nfs-provisioner-9fdb5b47-djg55   1/1     Running   0          24m   172.16.221.129   w1-k8s   <none>           <none>
[root@m-k8s ~]# kubectl apply -f https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/vol/nfs-provisioner/2.claim-pvc1Gi.yaml
persistentvolumeclaim/claim-svr-pvc created
[root@m-k8s ~]# k get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
persistentvolume/pvc-c8d98b5b-86b1-4b58-ae77-05d9233227ee   1Gi        RWX            Delete           Bound    default/claim-svr-pvc   nfs-svr-sc              14s

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-svr-pvc   Bound    pvc-c8d98b5b-86b1-4b58-ae77-05d9233227ee   1Gi        RWX            nfs-svr-sc     14s
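In case you don't want to open the repo, a 1Gi claim like the one above is roughly the following (a sketch: the name, size, access mode, and class match the kubectl output; the rest is standard PVC boilerplate):

```yaml
# Minimal sketch of a 1Gi claim against the nfs-svr-sc StorageClass.
# Name/size/mode/class match the kubectl output above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-svr-pvc
spec:
  accessModes:
    - ReadWriteMany            # shows up as RWX
  storageClassName: nfs-svr-sc
  resources:
    requests:
      storage: 1Gi
```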
The provisioner was pinned to the w1-k8s node and uses /srv on the host, so if you look there you'll find a directory created according to the provisioner's naming rule.
[root@w1-k8s ~]# ls /srv/
ganesha.log nfs-provisioner.identity pvc-c8d98b5b-86b1-4b58-ae77-05d9233227ee v4old v4recov vfs.conf
1-3. This time, shall we create one via a sts?
[root@m-k8s ~]# kubectl apply -f https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/vol/nfs-provisioner/3.sts-claim-vct1Gi.yaml
statefulset.apps/add-svr-pvc created
[root@m-k8s ~]# k get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE
persistentvolume/pvc-22f09ae8-8813-40e1-ba82-c63f2049e200   1Gi        RWX            Delete           Bound    default/vct-svr-vol-add-svr-pvc-0   nfs-svr-sc              4s
persistentvolume/pvc-c8d98b5b-86b1-4b58-ae77-05d9233227ee   1Gi        RWX            Delete           Bound    default/claim-svr-pvc               nfs-svr-sc              4m2s

NAME                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-svr-pvc               Bound    pvc-c8d98b5b-86b1-4b58-ae77-05d9233227ee   1Gi        RWX            nfs-svr-sc     4m2s
persistentvolumeclaim/vct-svr-vol-add-svr-pvc-0   Bound    pvc-22f09ae8-8813-40e1-ba82-c63f2049e200   1Gi        RWX            nfs-svr-sc     4s

Works great!!
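The interesting part of the sts manifest is volumeClaimTemplates, which is how a pv appears with no separate pvc. A rough sketch (the sts and template names are inferred from the PVC name vct-svr-vol-add-svr-pvc-0 in the output above; the container and service name are placeholders, so check the linked repo for the real spec):

```yaml
# Sketch of how the StatefulSet claims storage without a standalone PVC.
# Each replica gets a PVC named <template>-<sts-name>-<ordinal>,
# e.g. vct-svr-vol-add-svr-pvc-0 as seen above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: add-svr-pvc
spec:
  replicas: 1
  serviceName: add-svr-pvc          # placeholder headless service name
  selector:
    matchLabels: {app: add-svr-pvc}
  template:
    metadata:
      labels: {app: add-svr-pvc}
    spec:
      containers:
      - name: app                    # placeholder container
        image: nginx
        volumeMounts:
        - name: vct-svr-vol
          mountPath: /pvc-vol
  volumeClaimTemplates:
  - metadata:
      name: vct-svr-vol
    spec:
      accessModes: [ReadWriteMany]
      storageClassName: nfs-svr-sc
      resources:
        requests:
          storage: 1Gi
```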
2. NFS-Client
- Source used in this walkthrough
- https://github.com/sysnet4admin/IaC/tree/master/manifests/vol/nfs-client-provisioner
- The original is credited at that link
2-0. NFS-Client needs an actual NFS server, so first we set one up.
[root@m-k8s ~]# bash <(curl -s https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/vol/nfs-client-provisioner/0.Builder_nfs_server.sh)
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@m-k8s ~]# exportfs
/nfs_shared     192.168.1.0/24
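What the builder script sets up amounts to an /etc/exports entry like this (the path and network match the exportfs output above, but the option flags are an assumption; check 0.Builder_nfs_server.sh for what it really writes):

```
# /etc/exports -- sketch; the option flags are assumed,
# the path and network match the exportfs output above
/nfs_shared 192.168.1.0/24(rw,sync,no_root_squash)
```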
2-1. Then, as before, install a serviceaccount and assorted other things. The rbac objects are shared with part 1, so they come back as unchanged~
[root@m-k8s ~]# kubectl apply -f https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/vol/nfs-client-provisioner/1.nfs-client+.yaml
serviceaccount/nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-provisioner unchanged
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-provisioner configured
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner configured
storageclass.storage.k8s.io/nfs-sc created
deployment.apps/nfs-client created
A pod named nfs-client takes on the NFS-Provisioner role here (I mean the part that serves the claims, not the provisioner field in the sc).
[root@m-k8s ~]# k get pod
NAME                             READY   STATUS    RESTARTS   AGE
add-svr-pvc-0                    1/1     Running   0          18m
nfs-client-54fdb4c5bc-z6bdc      1/1     Running   0          81s   <<<<<<
nfs-provisioner-9fdb5b47-djg55   1/1     Running   1          31m
2-2-1. So, shall we claim a pvc? The newly created one is claim-pvc (just so you don't get them mixed up).
[root@m-k8s ~]# kubectl apply -f https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/vol/nfs-client-provisioner/2-1.claim-pvc1Gi.yaml
persistentvolumeclaim/claim-pvc created
[root@m-k8s ~]# k get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE
persistentvolume/pvc-22f09ae8-8813-40e1-ba82-c63f2049e200   1Gi        RWX            Delete           Bound    default/vct-svr-vol-add-svr-pvc-0   nfs-svr-sc              18m
persistentvolume/pvc-55c80adc-55ec-4db9-9768-bda13099f1f3   1Gi        RWX            Delete           Bound    default/claim-pvc                   nfs-sc                  4s
persistentvolume/pvc-c8d98b5b-86b1-4b58-ae77-05d9233227ee   1Gi        RWX            Delete           Bound    default/claim-svr-pvc               nfs-svr-sc              22m

NAME                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-pvc                   Bound    pvc-55c80adc-55ec-4db9-9768-bda13099f1f3   1Gi        RWX            nfs-sc         4s
persistentvolumeclaim/claim-svr-pvc               Bound    pvc-c8d98b5b-86b1-4b58-ae77-05d9233227ee   1Gi        RWX            nfs-svr-sc     22m
persistentvolumeclaim/vct-svr-vol-add-svr-pvc-0   Bound    pvc-22f09ae8-8813-40e1-ba82-c63f2049e200   1Gi        RWX            nfs-svr-sc     18m
2-2-2. Now that the pvc exists, we should mount it as a volume in a pod and spin one up, right?
[root@m-k8s ~]# kubectl apply -f https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/vol/nfs-client-provisioner/2-2.use-pvc.yaml
deployment.apps/use-pvc created

Shall we peek inside the created pod?
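2-2.use-pvc.yaml is roughly a Deployment that mounts the claim: a sketch (the names and the /pvc-vol mount path match the pasted output; the container image is a placeholder, so see the repo for the real file):

```yaml
# Sketch of 2-2.use-pvc.yaml: a Deployment mounting the claim-pvc PVC.
# Deployment name, claimName, and mountPath match the terminal output;
# the image is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: use-pvc
spec:
  replicas: 1
  selector:
    matchLabels: {app: use-pvc}
  template:
    metadata:
      labels: {app: use-pvc}
    spec:
      containers:
      - name: use-pvc
        image: nginx               # placeholder image
        volumeMounts:
        - name: pvc-vol
          mountPath: /pvc-vol
      volumes:
      - name: pvc-vol
        persistentVolumeClaim:
          claimName: claim-pvc     # the PVC claimed in 2-2-1
```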
[root@m-k8s ~]# k get pod
NAME                             READY   STATUS    RESTARTS   AGE
add-svr-pvc-0                    1/1     Running   0          19m
nfs-client-54fdb4c5bc-z6bdc      1/1     Running   0          2m15s
nfs-provisioner-9fdb5b47-djg55   1/1     Running   1          31m
use-pvc-64679676d7-cb7hx         1/1     Running   0          17s   <<<<<<<<
[root@m-k8s ~]# k exec use-pvc-64679676d7-cb7hx -it -- /bin/bash
root@use-pvc-64679676d7-cb7hx:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 37G 2.5G 35G 7% /
tmpfs 496M 0 496M 0% /dev
tmpfs 496M 0 496M 0% /sys/fs/cgroup
192.168.1.10:/nfs_shared/default-claim-pvc-pvc-55c80adc-55ec-4db9-9768-bda13099f1f3 37G 3.1G 34G 9% /pvc-vol
/dev/mapper/centos_k8s-root 37G 2.5G 35G 7% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 496M 12K 496M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 496M 0 496M 0% /proc/acpi
tmpfs 496M 0 496M 0% /proc/scsi
tmpfs
Nicely mounted, and the directory got created too :)
2-3. A sts would surely work too, but let's confirm it works directly, without creating a pvc first.
[root@m-k8s ~]# kubectl apply -f https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/vol/nfs-client-provisioner/3.sts-claim-vct1Gi.yaml
statefulset.apps/add-pvc created
[root@m-k8s ~]# k get pod
NAME READY STATUS RESTARTS AGE
add-pvc-0 1/1 Running 0 39s
add-svr-pvc-0 1/1 Running 0 20m
nfs-client-54fdb4c5bc-z6bdc 1/1 Running 0 3m33s
nfs-provisioner-9fdb5b47-djg55 1/1 Running 1 33m
use-pvc-64679676d7-cb7hx 1/1 Running 0 95s
[root@m-k8s ~]# k exec add-pvc-0 -it -- /bin/bash
root@add-pvc-0:/# df -h
Filesystem                                                                                   Size  Used Avail Use% Mounted on
overlay                                                                                       37G  2.8G   35G   8% /
tmpfs                                                                                        496M     0  496M   0% /dev
tmpfs                                                                                        496M     0  496M   0% /sys/fs/cgroup
192.168.1.10:/nfs_shared/default-vct-vol-add-pvc-0-pvc-10cc8eca-34f5-4679-a12e-e0c26841bc63   37G  3.1G   34G   9% /pvc-vol
/dev/mapper/centos_k8s-root                                                                   37G  2.8G   35G   8% /etc/hosts
shm                                                                                           64M     0   64M   0% /dev/shm
tmpfs                                                                                        496M   12K  496M   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                        496M     0  496M   0% /proc/acpi
tmpfs                                                                                        496M     0  496M   0% /proc/scsi
tmpfs                                                                                        496M     0  496M   0% /sys/firmware
root@add-pvc-0:/# exit
exit
[root@m-k8s ~]# ls /nfs_shared/
default-claim-pvc-pvc-55c80adc-55ec-4db9-9768-bda13099f1f3
default-vct-vol-add-pvc-0-pvc-10cc8eca-34f5-4679-a12e-e0c26841bc63
[root@m-k8s ~]#
Two directories were created on the NFS server (/nfs_shared).
With NFS-Provisioner (and NFS-Client), I think the SMB market can use k8s volumes in a way that comes close to dynamic provisioning.
=====================================
3-line summary (seems to be the trend these days?)
1. NFS-Provisioner binds a hostPath and serves it out, with no separate NFS setup.
   NFS-Client binds a shared dir on an NFS server (e.g. /nfs_shared) and serves that out.
2. Neither gives you truly dynamic capacity yet (capacity limits are not enforced).
3. Both support auto-creating a pv from a pvc claim and mounting volumes via volumeClaimTemplates in a StatefulSet, so pick whichever you like (they're pretty similar).
Bye~!