- Mayastor has high CPU usage and is difficult to configure.
- A StorageClass is used.
- Backups to cloud object storage are possible with Velero (see the sketch below).
- Two interface types are present: SATA3 and NVMe.
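Since Velero is only mentioned in passing here, a minimal sketch of a Velero backup to object storage, assuming an AWS S3 target (the bucket name, plugin version, credentials file, and namespace below are hypothetical placeholders):
# install Velero with the AWS object-storage plugin (bucket/credentials are placeholders)
$ velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.5.0 \
    --bucket my-backup-bucket \
    --secret-file ./credentials-velero
# back up a namespace to the bucket
$ velero backup create pv-backup --include-namespaces default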
# check the attached disks and their device names
$ df -h
$ sudo fdisk -l
...
Disk /dev/sda1
Disk model: CT1000MX500SSD1
Disk /dev/nvme0n1
Disk model: SHGP31-1000GM-2
Mount each disk:
# prepare directories to use as mount points
$ sudo mkdir /data # sata3
$ sudo mkdir /data_nvme # nvme
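The commands above only create the mount points; the actual mount step might look like the sketch below, assuming the SATA3 filesystem is on /dev/sda1 and the NVMe filesystem on /dev/nvme0n1p1 (adjust to your partition layout):
$ sudo mount /dev/sda1 /data
$ sudo mount /dev/nvme0n1p1 /data_nvme
# add matching entries to /etc/fstab if the mounts should persist across reboots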
Measure disk write performance with the following commands (oflag=direct bypasses the page cache so the disk itself is measured):
$ sudo dd if=/dev/zero bs=1024 count=2048 of=/data/test_file oflag=direct
$ sudo dd if=/dev/zero bs=1024 count=2048 of=/data_nvme/test_file oflag=direct
Measure disk read performance with the following commands:
$ sudo dd if=/data/test_file of=/dev/null bs=1024
$ sudo dd if=/data_nvme/test_file of=/dev/null bs=1024
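As a side note, bs=1024 count=2048 only transfers 2 MiB; a larger transfer, such as the hypothetical variant below, usually gives a more representative sequential throughput figure:
$ sudo dd if=/dev/zero bs=1M count=1024 of=/data/test_file oflag=direct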
Measure with the hdparm package:
$ sudo hdparm -Tt /dev/sda1    # -T: cached reads, -t: buffered disk reads
$ sudo hdparm -Tt /dev/nvme0n1
Use the SATA disk as the default storage:
# add helm repo
$ helm repo add openebs https://openebs.github.io/charts
$ helm repo update
# install openebs
$ helm upgrade --cleanup-on-fail \
    --install openebs openebs/openebs \
    --namespace openebs --create-namespace \
    --set localprovisioner.basePath=/data
# patch the storageclass to be the default
$ kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
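To confirm the patch took effect, listing the StorageClasses should show openebs-hostpath marked as the default (a quick check, not from the original steps):
$ kubectl get storageclass
$ kubectl get storageclass openebs-hostpath -o yaml | grep is-default-class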
Create a separate StorageClass for the NVMe disk:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/data_nvme"
    meta.helm.sh/release-name: openebs
    meta.helm.sh/release-namespace: openebs
    openebs.io/cas-type: local
  labels:
    app.kubernetes.io/managed-by: Helm
  name: openebs-hostpath-nvme
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
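A PVC that targets the new class only needs to reference it by name; a minimal sketch (the claim name and size below are hypothetical):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nvme-claim
spec:
  storageClassName: openebs-hostpath-nvme
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi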
Output of the helm install above:
NAME: openebs
LAST DEPLOYED: Fri Jul 8 06:12:03 2022
NAMESPACE: openebs
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Successfully installed OpenEBS.
Check the status by running: kubectl get pods -n openebs
The default values will install NDM and enable OpenEBS hostpath and device
storage engines along with their default StorageClasses. Use `kubectl get sc`
to see the list of installed OpenEBS StorageClasses.
**Note**: If you are upgrading from the older helm chart that was using cStor
and Jiva (non-csi) volumes, you will have to run the following command to include
the older provisioners:
helm upgrade openebs openebs/openebs \
--namespace openebs \
--set legacy.enabled=true \
--reuse-values
For other engines, you will need to perform a few more additional steps to
enable the engine, configure the engines (e.g. creating pools) and create
StorageClasses.
For example, cStor can be enabled using commands like:
helm upgrade openebs openebs/openebs \
--namespace openebs \
--set cstor.enabled=true \
--reuse-values
For more information,
- view the online documentation at https://openebs.io/docs or
- connect with an active community on Kubernetes slack #openebs channel.
kubestr can be used to run fio performance tests against a PV. Download kubestr with the following commands:
# download kubestr
$ wget https://github.com/kastenhq/kubestr/releases/download/v0.4.16/kubestr-v0.4.16-linux-amd64.tar.gz
$ tar -zxvf kubestr-v0.4.16-linux-amd64.tar.gz
$ sudo mv kubestr /usr/local/bin # move to a directory on PATH
$ export KUBE_CONFIG_PATH=~/.kube/config # kubestr requires KUBE_CONFIG_PATH env
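Running kubestr with no arguments is a quick sanity check; as far as I know it validates the cluster connection and lists the available storage provisioners before you run any fio tests:
$ kubestr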
See the link for example fio scripts. The test used the following configuration:
[global]
name=fio-rand-read
filename=fio-rand-read
rw=randread
bs=4K
direct=1
numjobs=16
time_based
runtime=120
group_reporting
[file1]
size=1G
ioengine=libaio
iodepth=64
NVMe test results
kubestr fio -f fio-rand-read.fio -s openebs-hostpath-nvme
PVC created kubestr-fio-pvc-fxpr5
Pod created kubestr-fio-pod-d9wcq
Running FIO test (fio-rand-read.fio) on StorageClass (openebs-hostpath-nvme) with a PVC of Size (100Gi)
Elapsed time- 2m1.37089566s
FIO test results:
FIO version - fio-3.30
Global options - ioengine= verify= direct=1 gtod_reduce=
JobName:
blocksize= filesize=1G iodepth=64 rw=
read:
IOPS=596395.687500 BW(KiB/s)=2385582
iops: min=567726 max=616972 avg=596777.375000
bw(KiB/s): min=2270904 max=2467888 avg=2387109.500000
Disk stats (read/write):
nvme0n1: ios=71538998/1068 merge=0/3118 ticks=122207275/1940 in_queue=122209926, util=99.976662%
- OK
SATA3 test results
kubestr fio -f fio-rand-read.fio -s openebs-hostpath
PVC created kubestr-fio-pvc-45hfg
Pod created kubestr-fio-pod-t8qxw
Running FIO test (fio-rand-read.fio) on StorageClass (openebs-hostpath) with a PVC of Size (100Gi)
Elapsed time- 2m1.464291994s
FIO test results:
FIO version - fio-3.30
Global options - ioengine= verify= direct=1 gtod_reduce=
JobName:
blocksize= filesize=1G iodepth=64 rw=
read:
IOPS=596206.875000 BW(KiB/s)=2384827
iops: min=569324 max=616212 avg=596600.187500
bw(KiB/s): min=2277296 max=2464848 avg=2386400.750000
Disk stats (read/write):
nvme0n1: ios=71498184/680 merge=0/3001 ticks=122181346/1799 in_queue=122183893, util=99.974991%
- OK