Build a home server - take two, part 3.
Creating the zpool for data
I am going to use the hard disks' serial numbers as labels and later reuse them when creating the zpool. Start by listing your hard disks:
ohlala# ls -l /dev/ada*
crw-r----- 1 root operator 0, 89 May 27 08:07 /dev/ada0
crw-r----- 1 root operator 0, 91 May 27 08:07 /dev/ada1
crw-r----- 1 root operator 0, 93 May 27 08:07 /dev/ada2
crw-r----- 1 root operator 0, 101 May 27 08:07 /dev/ada2p1
crw-r----- 1 root operator 0, 103 May 27 08:07 /dev/ada2p2
crw-r----- 1 root operator 0, 105 May 27 08:07 /dev/ada2p3
crw-r----- 1 root operator 0, 107 May 27 08:07 /dev/ada2p4
crw-r----- 1 root operator 0, 109 May 27 08:07 /dev/ada2p5
crw-r----- 1 root operator 0, 95 May 27 08:07 /dev/ada3
crw-r----- 1 root operator 0, 111 May 27 08:07 /dev/ada3p1
crw-r----- 1 root operator 0, 113 May 27 08:07 /dev/ada3p2
crw-r----- 1 root operator 0, 115 May 27 08:07 /dev/ada3p3
crw-r----- 1 root operator 0, 117 May 27 08:07 /dev/ada3p4
crw-r----- 1 root operator 0, 119 May 27 08:07 /dev/ada3p5
crw-r----- 1 root operator 0, 97 May 27 08:07 /dev/ada4
crw-r----- 1 root operator 0, 99 May 27 08:07 /dev/ada5
ohlala#
And issue the command
/usr/local/sbin/smartctl -d auto -i /dev/adaX
for every disk:
ohlala# /usr/local/sbin/smartctl -d auto -i /dev/ada0
smartctl 5.42 2011-10-20 r3458 [FreeBSD 9.0-RELEASE amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda Green (Adv. Format)
Device Model: ST2000DL003-9VT166
Serial Number: 5YD7JNXT
LU WWN Device Id: 5 000c50 045645768
Firmware Version: CC3C
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: ATA-8-ACS revision 4
Local Time is: Sun May 27 09:36:06 2012 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
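If you have several disks it is handy to collect all the serial numbers in one go. This is just a convenience sketch, assuming smartmontools is installed under /usr/local/sbin and the data disks are ada0, ada1, ada4 and ada5:

#!/bin/sh
# Print "device: serial number" for each data disk (sketch, adjust the device list).
for disk in ada0 ada1 ada4 ada5; do
    serial=$(/usr/local/sbin/smartctl -d auto -i /dev/${disk} | awk -F': *' '/^Serial Number/ {print $2}')
    echo "${disk}: ${serial}"
done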
Initialize the disks:
ohlala#
ohlala# gpart create -s gpt ada0
ada0 created
ohlala# gpart create -s gpt ada1
ada1 created
ohlala# gpart create -s gpt ada4
ada4 created
ohlala# gpart create -s gpt ada5
ada5 created
Create ZFS partitions:
ohlala# gpart add -t freebsd-zfs -l 5YD7JNXT ada0
ada0p1 added
ohlala# gpart add -t freebsd-zfs -l 5YD7SM2A ada1
ada1p1 added
ohlala# gpart add -t freebsd-zfs -l 5YD7SMPJ ada4
ada4p1 added
ohlala# gpart add -t freebsd-zfs -l 5YD8AVDH ada5
ada5p1 added
ohlala# ls -l /dev/gpt
total 0
crw-r----- 1 root operator 0, 138 May 27 10:30 5YD7JNXT
crw-r----- 1 root operator 0, 162 May 27 10:32 5YD7SM2A
crw-r----- 1 root operator 0, 166 May 27 10:33 5YD7SMPJ
crw-r----- 1 root operator 0, 170 May 27 10:34 5YD8AVDH
crw-r----- 1 root operator 0, 121 May 27 08:07 boot0
crw-r----- 1 root operator 0, 132 May 27 08:07 boot1
crw-r----- 1 root operator 0, 130 May 27 08:07 cache0
crw-r----- 1 root operator 0, 141 May 27 08:07 cache1
crw-r----- 1 root operator 0, 124 May 27 10:07 swap0
crw-r----- 1 root operator 0, 135 May 27 08:07 swap1
crw-r----- 1 root operator 0, 128 May 27 08:07 zil0
crw-r----- 1 root operator 0, 139 May 27 08:07 zil1
ohlala#
Create the zpool:
ohlala# zpool create data raidz /dev/gpt/5YD7JNXT /dev/gpt/5YD7SM2A /dev/gpt/5YD7SMPJ spare /dev/gpt/5YD8AVDH log mirror /dev/gpt/zil0 /dev/gpt/zil1 cache /dev/gpt/cache0 /dev/gpt/cache1
ohlala# zpool status
pool: data
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
data ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
gpt/5YD7JNXT ONLINE 0 0 0
gpt/5YD7SM2A ONLINE 0 0 0
gpt/5YD7SMPJ ONLINE 0 0 0
logs
mirror-1 ONLINE 0 0 0
gpt/zil0 ONLINE 0 0 0
gpt/zil1 ONLINE 0 0 0
cache
gpt/cache0 ONLINE 0 0 0
gpt/cache1 ONLINE 0 0 0
spares
gpt/5YD8AVDH AVAIL
errors: No known data errors
pool: zroot
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
ada3p3 ONLINE 0 0 0
errors: No known data errors
ohlala#
ohlala# zfs set checksum=fletcher4 data
ohlala#
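The datasets that show up in the snapshot listings further down (data/virt and its children) are ordinary ZFS filesystems created on the new pool. Roughly like this, as a sketch; the exact layout and options are up to you:

zfs create data/virt
zfs create data/virt/DC1
zfs create data/virt/OS
zfs create data/virt/SERVER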
Configure backup and basic maintenance
It is a good thing to have some kind of backup of your work during the workweek. We will set up hourly, daily, weekly and monthly snapshots that are kept on the server. As many have pointed out on their blogs, this should be considered temporary: during the week you can use the snapshots to roll back, but for long-term storage you will have to replicate your work off-site. I'll come back to that.
It is also recommended that the pool is scrubbed regularly. For consumer disks the recommended interval is once a week. We will configure that too.
The guide: http://www.neces.com/blog/technology/integrating-freebsd-zfs-and-periodic-snapshots-and-scrubs
As you can see from Ross' post, he uses ZFS in an enterprise environment. I'm using consumer disks, so I will do a weekly scrub instead. Start by installing zfs-periodic from /usr/ports/sysutils/zfs-periodic. My modifications:
ohlala# cd /usr/ports/sysutils/zfs-periodic/
ohlala# make install clean
[root@ohlala /etc/periodic]# cp /usr/local/etc/periodic/monthly/998.zfs-scrub /usr/local/etc/periodic/weekly/998.zfs-scrub
Edit /usr/local/etc/periodic/weekly/998.zfs-scrub.
Change "pools=$monthly_zfs_scrub_pools"
to "pools=$weekly_zfs_scrub_pools"
and
"case "$monthly_zfs_scrub_enable" in"
to "case "$weekly_zfs_scrub_enable" in".
ohlala# vi /usr/local/etc/periodic/weekly/998.zfs-scrub
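If you prefer, the same edit can be done with sed instead of vi. A sketch (back up the file first):

# Replace the monthly_* variable names with their weekly_* counterparts.
sed -i '' 's/monthly_zfs_scrub/weekly_zfs_scrub/g' /usr/local/etc/periodic/weekly/998.zfs-scrub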
My /etc/periodic.conf:
hourly_output="root"
hourly_show_success="NO"
hourly_show_info="YES"
hourly_show_badconfig="NO"
hourly_zfs_snapshot_enable="YES"
hourly_zfs_snapshot_pools="data"
hourly_zfs_snapshot_keep=10
daily_zfs_snapshot_enable="YES"
daily_zfs_snapshot_pools="data"
daily_zfs_snapshot_keep=7
# daily_status_zfs_enable="YES"
# daily_output="[email protected]"
daily_zfs_scrub_enable="YES"
daily_zfs_scrub_pools="data zroot"
weekly_zfs_snapshot_enable="YES"
weekly_zfs_snapshot_pools="data"
weekly_zfs_snapshot_keep=5
weekly_zfs_scrub_enable="YES"
weekly_zfs_scrub_pools="data zroot"
monthly_zfs_snapshot_enable="YES"
# monthly_zfs_scrub_enable="YES"
# monthly_zfs_scrub_pools="data zroot"
monthly_zfs_snapshot_pools="data"
monthly_zfs_snapshot_keep=2
Add "[email protected]" to /etc/crontab.
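For reference, the stock /etc/crontab only runs the daily, weekly and monthly periodic jobs, so the hourly snapshots need a line of their own. Something along these lines; the exact entries are a sketch, not copied verbatim from my crontab:

# /etc/crontab additions (sketch)
MAILTO="[email protected]"
# Run the hourly periodic scripts so the hourly snapshots get taken.
0       *       *       *       *       root    periodic hourly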
When the clock has passed the full hour:
ohlala# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
data@hourly-2012-06-09-08 0 - 41.3K -
data/virt@hourly-2012-06-09-08 0 - 44.0K -
data/virt/DC1@hourly-2012-06-09-08 0 - 40.0K -
data/virt/OS@hourly-2012-06-09-08 170K - 1.94G -
data/virt/SERVER@hourly-2012-06-09-08 0 - 40.0K -
ohlala#
A few hours later:
$ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
data@hourly-2012-06-09-08 0 - 41.3K -
data@hourly-2012-06-09-09 0 - 41.3K -
data@hourly-2012-06-09-10 0 - 41.3K -
data@hourly-2012-06-09-11 0 - 41.3K -
data@hourly-2012-06-09-12 24.0K - 41.3K -
data/virt@hourly-2012-06-09-08 0 - 44.0K -
data/virt@hourly-2012-06-09-09 0 - 44.0K -
data/virt@hourly-2012-06-09-10 0 - 45.3K -
data/virt@hourly-2012-06-09-11 0 - 45.3K -
data/virt@hourly-2012-06-09-12 42.0K - 106K -
data/virt/DC1@hourly-2012-06-09-08 0 - 40.0K -
data/virt/DC1@hourly-2012-06-09-09 0 - 40.0K -
data/virt/DC1@hourly-2012-06-09-10 0 - 39.0G -
data/virt/DC1@hourly-2012-06-09-11 0 - 39.0G -
data/virt/DC1@hourly-2012-06-09-12 169M - 39.0G -
data/virt/OS@hourly-2012-06-09-08 172K - 1.94G -
data/virt/OS@hourly-2012-06-09-09 24.0K - 2.95G -
data/virt/OS@hourly-2012-06-09-10 0 - 2.95G -
data/virt/OS@hourly-2012-06-09-11 0 - 2.95G -
data/virt/OS@hourly-2012-06-09-12 24.0K - 2.95G -
data/virt/SERVER@hourly-2012-06-09-08 0 - 40.0K -
data/virt/SERVER@hourly-2012-06-09-09 0 - 40.0K -
data/virt/SERVER@hourly-2012-06-09-10 0 - 40.0K -
data/virt/SERVER@hourly-2012-06-09-11 0 - 40.0K -
data/virt/SERVER@hourly-2012-06-09-12 0 - 40.0K -
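When you need something back, the snapshots can be browsed read-only under the hidden .zfs/snapshot directory of each dataset, or a whole dataset can be rolled back. A couple of hedged examples (the file name is made up):

# Copy a single file back from an hourly snapshot.
cp /data/virt/OS/.zfs/snapshot/hourly-2012-06-09-11/somefile.img /data/virt/OS/

# Or roll the whole dataset back to its most recent snapshot (discards later changes!).
zfs rollback data/virt/OS@hourly-2012-06-09-12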
Eventually you will get a status mail:
Removing stale files from /var/preserve:
Cleaning out old system announcements:
Removing stale files from /var/rwho:
Backup passwd and group files:
Verifying group file syntax:
/etc/group is fine
Backing up mail aliases:
Backing up package db directory:
Disk status:
Filesystem Size Used Avail Capacity Mounted on
zroot 15G 2.6G 13G 16% /
devfs 1.0k 1.0k 0B 100% /dev
data 3.6T 41k 3.6T 0% /data
data/virt 3.6T 44k 3.6T 0% /data/virt
data/virt/DC1 3.6T 40k 3.6T 0% /data/virt/DC1
data/virt/OS 3.6T 3G 3.6T 0% /data/virt/OS
data/virt/SERVER 3.6T 40k 3.6T 0% /data/virt/SERVER
Last dump(s) done (Dump '>' file systems):
Checking status of zfs pools:
all pools are healthy
Network interface status:
Name Mtu Network Address Ipkts Ierrs Idrop Opkts Oerrs Coll
usbus 0 <Link#1> 0 0 0 0 0 0
re0 1500 <Link#2> 15:da:e9:bd:b5:8f 3715 0 0 2622 0 0
re0 1500 192.168.1.0 ohlala 3653 - - 2565 - -
re0 1500 fe80::16da:e9 fe80::16da:e9ff:f 0 - - 1 - -
usbus 0 <Link#3> 0 0 0 0 0 0
usbus 0 <Link#4> 0 0 0 0 0 0
lo0 16384 <Link#5> 0 0 0 0 0 0
lo0 16384 localhost ::1 0 - - 0 - -
lo0 16384 fe80::1%lo0 fe80::1 0 - - 0 - -
lo0 16384 your-net localhost 0 - - 0 - -
Local system status:
9:05AM up 53 mins, 2 users, load averages: 0.00, 0.00, 0.00
Mail in local queue:
mailq: Mail queue is empty
Mail in submit queue:
mailq: Mail queue is empty
Security check:
(output mailed separately)
Checking for rejected mail hosts:
Checking for denied zone transfers (AXFR and IXFR):
Doing zfs daily snapshots:
taking snapshot, data@daily-2012-06-17
Doing zfs scrubs:
starting scrub on data
pool: data
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jun 17 09:05:30 2012
config:
NAME STATE READ WRITE CKSUM
data ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
gpt/5YD7JNXT ONLINE 0 0 0
gpt/5YD7SM2A ONLINE 0 0 0
gpt/5YD7SMPJ ONLINE 0 0 0
logs
mirror-1 ONLINE 0 0 0
gpt/zil0 ONLINE 0 0 0
gpt/zil1 ONLINE 0 0 0
cache
gpt/cache0 ONLINE 0 0 0
gpt/cache1 ONLINE 0 0 0
spares
gpt/5YD8AVDH AVAIL
errors: No known data errors
starting scrub on zroot
pool: zroot
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jun 17 09:06:38 2012
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
errors: No known data errors
-- End of daily output --