
AWS – Working with the ZFS filesystem on Amazon Linux (RAID0, RAID1, RAID-Z and RAID-Z2)

[Figure: ZFS self-healing]

ZFS is a combined filesystem and logical volume manager created by Sun Microsystems (now owned by Oracle). Its features include pooled storage with integrated volume management (zpool), protection against data corruption, support for high storage capacities, efficient data compression, snapshots and copy-on-write clones, continuous integrity checking with automatic repair (self-healing), RAID-Z, and native NFSv4 ACLs. ZFS is licensed under the Common Development and Distribution License (CDDL).

Installing ZFS in Amazon Linux

sudo yum update -y
sudo yum repolist enabled
sudo yum install -y gcc
sudo yum install kernel-devel zlib-devel libuuid-devel -y
wget -c https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.8/spl-0.6.5.8.tar.gz
wget -c https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.8/zfs-0.6.5.8.tar.gz
tar -zxvf spl-0.6.5.8.tar.gz
tar -zxvf zfs-0.6.5.8.tar.gz
cd spl-0.6.5.8; ./configure; sudo make && sudo make install
cd ..
cd zfs-0.6.5.8; ./configure --with-spl=/usr/local/src/spl-0.6.5.8; sudo make && sudo make install

# sudo visudo and add /usr/local/sbin to Defaults secure_path variable
sudo modprobe zfs
lsmod |grep zfs
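
Once the module loads cleanly, a quick sanity check (a minimal sketch; the exact version string depends on your build, and the sysfs path assumes the module is loaded) is to query the module and confirm the userland tools respond:

$ cat /sys/module/zfs/version   # should report the installed version, e.g. 0.6.5.8
$ zpool status                  # with no pools yet, prints "no pools available"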

In this blog we are going to explain how to create RAID0, RAID1, RAID-Z and RAID-Z2 pools. Let's attach some EBS volumes using the AWS console.

We can check the available disks with the command below; you should see output similar to this. For this demo I attached nine extra EBS volumes, each 10GB in size.

$ ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sda5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk
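
Note that depending on the instance and virtualization type, the attached volumes may instead show up as /dev/xvdb through /dev/xvdk; if /dev/sd* turns up nothing, "lsblk" will show whichever naming your instance uses:

$ lsblk -d -o NAME,SIZE,TYPE   # lists each whole disk with its size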

We will use /dev/sdb through /dev/sdk for the ZFS filesystems. Now we can start creating the pools.

Test 1: RAID0 (stripe)

$ zpool list
no pools available

$ zpool create -f pool0 /dev/sdb
$ zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool0 9.98G 64K 9.98G - 0% 0% 1.00x ONLINE -

The “zpool list” output shows that we successfully created a RAID0 (single stripe) ZFS pool named pool0 with roughly 10GB of capacity.
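
A pool is usable as soon as it is created (it is mounted at /pool0 by default), but typically you would create datasets inside it. As a quick sketch (the dataset name "data" and the lz4 compression setting are arbitrary choices for illustration, not requirements):

$ zfs create pool0/data
$ zfs set compression=lz4 pool0/data
$ zfs list pool0/data   # shows the new dataset mounted at /pool0/data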

Test 2: RAID1 (Mirror)

$ zpool create -f pool1 mirror /dev/sdc /dev/sdd
$ zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool0 9.98G 64K 9.98G - 0% 0% 1.00x ONLINE -
pool1 9.98G 64K 9.98G - 0% 0% 1.00x ONLINE -

We can see that we have two pools now: pool0 for RAID0 and pool1 for RAID1. To check the status of the pools, we can use the command below:

$ zpool status
pool: pool0
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
pool0 ONLINE 0 0 0
sdb ONLINE 0 0 0

errors: No known data errors

pool: pool1
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
pool1 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdc ONLINE 0 0 0
sdd ONLINE 0 0 0

errors: No known data errors
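
The point of the mirror is that either disk can fail without losing data. A quick way to demonstrate this on the demo pool (a sketch using the same device names as above) is to take one side of the mirror offline and watch the pool degrade rather than fail:

$ zpool offline pool1 sdc
$ zpool status pool1    # state: DEGRADED, but the pool stays usable
$ zpool online pool1 sdc
$ zpool status pool1    # ZFS resilvers sdc and the pool returns to ONLINE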

Test 3: RAID-Z (single parity – 3 disks)

RAID-Z requires a minimum of three drives and is conceptually a compromise between RAID 0 and RAID 1, comparable to traditional RAID 5. In a RAID-Z pool, if a single disk dies, you simply replace that disk and ZFS automatically rebuilds the data from the parity information on the other disks (a replacement sketch follows the output below). To lose all of the information in the storage pool, two disks would have to die.

$ zpool create -f poolz1 raidz sde sdf sdg
$ zpool list poolz1
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
poolz1 29.94G 117K 29.94G - 0% 0% 1.00x ONLINE -

$ zpool status poolz1
pool: poolz1
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
poolz1 ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0

errors: No known data errors

$ df -h /poolz1
Filesystem Size Used Avail Use% Mounted on
poolz1 19.9G 0 19.9G 0% /poolz1
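
To illustrate the replacement workflow described above (a sketch only: /dev/sdl stands in for a hypothetical fresh EBS volume that you would attach via the AWS console first):

$ zpool replace poolz1 sde /dev/sdl
$ zpool status poolz1   # shows a resilver in progress, rebuilding onto the new disk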

Test 4: RAID-Z2 (double parity – 4 disks)

To make the drive setup even more redundant, you can use double parity (RAID 6 in traditional terms, RAID-Z2 in ZFS), which survives the loss of any two disks.

$ zpool create -f poolz2 raidz2 sdh sdi sdj sdk
$ zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
poolz2 39.94G 135K 39.94G - 0% 0% 1.00x ONLINE -

$ df -h /poolz2
Filesystem Size Used Avail Use% Mounted on
poolz2 19.9G 0 19.9G 0% /poolz2

$ zpool status poolz2
pool: poolz2
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
poolz2 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
sdh ONLINE 0 0 0
sdi ONLINE 0 0 0
sdj ONLINE 0 0 0
sdk ONLINE 0 0 0

errors: No known data errors

As we can see, df -h shows that our 40GB of raw capacity has been reduced to roughly 20GB of usable space, since two disks' worth of space (20GB) is used to hold the double parity. The “zpool status” output confirms that the pool is using RAID-Z2.
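
Finally, two housekeeping commands worth knowing. A scrub walks every block in the pool and verifies it against its checksum, triggering ZFS's self-healing repair wherever the redundancy allows; and when you are finished with the demo, "zpool destroy" removes a pool along with all of its data, so use it carefully:

$ zpool scrub poolz2
$ zpool status poolz2   # the "scan:" line reports scrub progress and any repairs
$ zpool destroy pool0   # permanently destroys the pool and all data on it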
