Build a home server - take two, part 3.

Creating the zpool for data
I am going to use each hard disk's serial number as the GPT label and later reuse it when creating the zpool. Start by listing your hard disks:

ohlala# ls -l /dev/ada*
crw-r-----  1 root  operator    0,  89 May 27 08:07 /dev/ada0
crw-r-----  1 root  operator    0,  91 May 27 08:07 /dev/ada1
crw-r-----  1 root  operator    0,  93 May 27 08:07 /dev/ada2
crw-r-----  1 root  operator    0, 101 May 27 08:07 /dev/ada2p1
crw-r-----  1 root  operator    0, 103 May 27 08:07 /dev/ada2p2
crw-r-----  1 root  operator    0, 105 May 27 08:07 /dev/ada2p3
crw-r-----  1 root  operator    0, 107 May 27 08:07 /dev/ada2p4
crw-r-----  1 root  operator    0, 109 May 27 08:07 /dev/ada2p5
crw-r-----  1 root  operator    0,  95 May 27 08:07 /dev/ada3
crw-r-----  1 root  operator    0, 111 May 27 08:07 /dev/ada3p1
crw-r-----  1 root  operator    0, 113 May 27 08:07 /dev/ada3p2
crw-r-----  1 root  operator    0, 115 May 27 08:07 /dev/ada3p3
crw-r-----  1 root  operator    0, 117 May 27 08:07 /dev/ada3p4
crw-r-----  1 root  operator    0, 119 May 27 08:07 /dev/ada3p5
crw-r-----  1 root  operator    0,  97 May 27 08:07 /dev/ada4
crw-r-----  1 root  operator    0,  99 May 27 08:07 /dev/ada5
ohlala#


Then issue the command /usr/local/sbin/smartctl -d auto -i /dev/adaX for every disk to find its serial number:

ohlala# /usr/local/sbin/smartctl -d auto -i /dev/ada0
smartctl 5.42 2011-10-20 r3458 [FreeBSD 9.0-RELEASE amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda Green (Adv. Format)
Device Model:     ST2000DL003-9VT166
Serial Number:    5YD7JNXT
LU WWN Device Id: 5 000c50 045645768
Firmware Version: CC3C
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Sun May 27 09:36:06 2012 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
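
To collect just the serial numbers in one go, a small loop works; this is only a sketch assuming a Bourne-style shell and that the data disks are ada0, ada1, ada4 and ada5 as in my case:

ohlala# for d in ada0 ada1 ada4 ada5; do /usr/local/sbin/smartctl -d auto -i /dev/$d | grep 'Serial Number'; done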


Initialize the disks:
ohlala#
ohlala# gpart create -s gpt ada0
ada0 created
ohlala# gpart create -s gpt ada1
ada1 created
ohlala# gpart create -s gpt ada4
ada4 created
ohlala# gpart create -s gpt ada5
ada5 created
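
If one of the disks has been used before, gpart create may refuse because an old partition table is still present. In that case it can be wiped first (destructive, so double-check the device name):

ohlala# gpart destroy -F ada0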

Create ZFS partitions:
ohlala# gpart add -t freebsd-zfs -l 5YD7JNXT ada0
ada0p1 added
ohlala# gpart add -t freebsd-zfs -l 5YD7SM2A ada1
ada1p1 added
ohlala# gpart add -t freebsd-zfs -l 5YD7SMPJ ada4
ada4p1 added
ohlala# gpart add -t freebsd-zfs -l 5YD8AVDH ada5
ada5p1 added
ohlala# ls -l /dev/gpt
total 0
crw-r-----  1 root  operator    0, 138 May 27 10:30 5YD7JNXT
crw-r-----  1 root  operator    0, 162 May 27 10:32 5YD7SM2A
crw-r-----  1 root  operator    0, 166 May 27 10:33 5YD7SMPJ
crw-r-----  1 root  operator    0, 170 May 27 10:34 5YD8AVDH
crw-r-----  1 root  operator    0, 121 May 27 08:07 boot0
crw-r-----  1 root  operator    0, 132 May 27 08:07 boot1
crw-r-----  1 root  operator    0, 130 May 27 08:07 cache0
crw-r-----  1 root  operator    0, 141 May 27 08:07 cache1
crw-r-----  1 root  operator    0, 124 May 27 10:07 swap0
crw-r-----  1 root  operator    0, 135 May 27 08:07 swap1
crw-r-----  1 root  operator    0, 128 May 27 08:07 zil0
crw-r-----  1 root  operator    0, 139 May 27 08:07 zil1
ohlala#


Create the zpool:
ohlala# zpool create data raidz /dev/gpt/5YD7JNXT /dev/gpt/5YD7SM2A /dev/gpt/5YD7SMPJ spare /dev/gpt/5YD8AVDH log mirror /dev/gpt/zil0 /dev/gpt/zil1 cache /dev/gpt/cache0 /dev/gpt/cache1
ohlala# zpool status
  pool: data
 state: ONLINE
 scan: none requested
config:

NAME              STATE     READ WRITE CKSUM
data              ONLINE       0     0     0
  raidz1-0        ONLINE       0     0     0
    gpt/5YD7JNXT  ONLINE       0     0     0
    gpt/5YD7SM2A  ONLINE       0     0     0
    gpt/5YD7SMPJ  ONLINE       0     0     0
logs
  mirror-1        ONLINE       0     0     0
    gpt/zil0      ONLINE       0     0     0
    gpt/zil1      ONLINE       0     0     0
cache
  gpt/cache0      ONLINE       0     0     0
  gpt/cache1      ONLINE       0     0     0
spares
  gpt/5YD8AVDH    AVAIL

errors: No known data errors

  pool: zroot
 state: ONLINE
 scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
zroot       ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    ada2p3  ONLINE       0     0     0
    ada3p3  ONLINE       0     0     0

errors: No known data errors
ohlala#
ohlala# zfs set checksum=fletcher4 data
ohlala#
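
The post does not show it, but the data/virt datasets that appear in the snapshot listings further down were created along these lines (a sketch, with the names taken from those listings):

ohlala# zfs create data/virt
ohlala# zfs create data/virt/DC1
ohlala# zfs create data/virt/OS
ohlala# zfs create data/virt/SERVER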


Configure backup and basic maintenance
It is a good thing to have some kind of backup of your work during the week. We will set up hourly, daily, weekly and monthly snapshots that are kept on the server. As many have pointed out on their blogs, this should be considered temporary: during the week you can use the snapshots to roll back, but for long-term storage you will have to replicate your work off-site. I'll come back to that.

It is also recommended to scrub the pool regularly. For consumer disks the recommended interval is once a week. We will configure that too.
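
For reference, a scrub can also be started by hand at any time, and its progress checked:

ohlala# zpool scrub data
ohlala# zpool status data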

The guide: http://www.neces.com/blog/technology/integrating-freebsd-zfs-and-periodic-snapshots-and-scrubs

As you can see from Ross's post, he uses ZFS in an enterprise environment. I'm using consumer disks, so I will do a weekly scrub instead. Start by installing zfs-periodic from /usr/ports/sysutils/zfs-periodic. My modifications:

ohlala# cd /usr/ports/sysutils/zfs-periodic/
ohlala# make install clean
[root@ohlala /etc/periodic]# cp /usr/local/etc/periodic/monthly/998.zfs-scrub /usr/local/etc/periodic/weekly/998.zfs-scrub


Edit /usr/local/etc/periodic/weekly/998.zfs-scrub. Change
"pools=$monthly_zfs_scrub_pools" to "pools=$weekly_zfs_scrub_pools"
and
"case "$monthly_zfs_scrub_enable" in" to "case "$weekly_zfs_scrub_enable" in".

ohlala# vi /usr/local/etc/periodic/weekly/998.zfs-scrub
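
If you prefer, both substitutions can be done with sed instead of vi (BSD sed syntax):

ohlala# sed -i '' -e 's/monthly_zfs_scrub/weekly_zfs_scrub/g' /usr/local/etc/periodic/weekly/998.zfs-scrub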


My /etc/periodic.conf:
hourly_output="root"
hourly_show_success="NO"
hourly_show_info="YES"
hourly_show_badconfig="NO"

hourly_zfs_snapshot_enable="YES"
hourly_zfs_snapshot_pools="data"
hourly_zfs_snapshot_keep=10

daily_zfs_snapshot_enable="YES"
daily_zfs_snapshot_pools="data"
daily_zfs_snapshot_keep=7

# daily_status_zfs_enable="YES"
# daily_output="[email protected]"

daily_zfs_scrub_enable="YES"
daily_zfs_scrub_pools="data zroot"

weekly_zfs_snapshot_enable="YES"
weekly_zfs_snapshot_pools="data"
weekly_zfs_snapshot_keep=5

weekly_zfs_scrub_enable="YES"
weekly_zfs_scrub_pools="data zroot"

monthly_zfs_snapshot_enable="YES"
# monthly_zfs_scrub_enable="YES"
# monthly_zfs_scrub_pools="data zroot"
monthly_zfs_snapshot_pools="data"
monthly_zfs_snapshot_keep=2

Add "[email protected]" to /etc/crontab.

---


When the clock has passed the full hour:

ohlala# zfs list -t snapshot
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
data@hourly-2012-06-09-08                  0      -  41.3K  -
data/virt@hourly-2012-06-09-08             0      -  44.0K  -
data/virt/DC1@hourly-2012-06-09-08         0      -  40.0K  -
data/virt/OS@hourly-2012-06-09-08       170K      -  1.94G  -
data/virt/SERVER@hourly-2012-06-09-08      0      -  40.0K  -
ohlala#


A few hours later:

$ zfs list -t snapshot
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
data@hourly-2012-06-09-08                  0      -  41.3K  -
data@hourly-2012-06-09-09                  0      -  41.3K  -
data@hourly-2012-06-09-10                  0      -  41.3K  -
data@hourly-2012-06-09-11                  0      -  41.3K  -
data@hourly-2012-06-09-12              24.0K      -  41.3K  -
data/virt@hourly-2012-06-09-08             0      -  44.0K  -
data/virt@hourly-2012-06-09-09             0      -  44.0K  -
data/virt@hourly-2012-06-09-10             0      -  45.3K  -
data/virt@hourly-2012-06-09-11             0      -  45.3K  -
data/virt@hourly-2012-06-09-12         42.0K      -   106K  -
data/virt/DC1@hourly-2012-06-09-08         0      -  40.0K  -
data/virt/DC1@hourly-2012-06-09-09         0      -  40.0K  -
data/virt/DC1@hourly-2012-06-09-10         0      -  39.0G  -
data/virt/DC1@hourly-2012-06-09-11         0      -  39.0G  -
data/virt/DC1@hourly-2012-06-09-12      169M      -  39.0G  -
data/virt/OS@hourly-2012-06-09-08       172K      -  1.94G  -
data/virt/OS@hourly-2012-06-09-09      24.0K      -  2.95G  -
data/virt/OS@hourly-2012-06-09-10          0      -  2.95G  -
data/virt/OS@hourly-2012-06-09-11          0      -  2.95G  -
data/virt/OS@hourly-2012-06-09-12      24.0K      -  2.95G  -
data/virt/SERVER@hourly-2012-06-09-08      0      -  40.0K  -
data/virt/SERVER@hourly-2012-06-09-09      0      -  40.0K  -
data/virt/SERVER@hourly-2012-06-09-10      0      -  40.0K  -
data/virt/SERVER@hourly-2012-06-09-11      0      -  40.0K  -
data/virt/SERVER@hourly-2012-06-09-12      0      -  40.0K  -
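
As mentioned earlier, the snapshots can be used to roll a dataset back. Note that zfs rollback only goes back to the most recent snapshot unless -r is given, which destroys the snapshots in between. A hypothetical example with one of the datasets above:

ohlala# zfs rollback data/virt/OS@hourly-2012-06-09-12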


Eventually you will get a status mail:
Removing stale files from /var/preserve:
Cleaning out old system announcements:
Removing stale files from /var/rwho:
Backup passwd and group files:
Verifying group file syntax:
/etc/group is fine

Backing up mail aliases:
Backing up package db directory:

Disk status:
Filesystem          Size    Used   Avail Capacity  Mounted on
zroot                15G    2.6G     13G    16%    /
devfs               1.0k    1.0k      0B   100%    /dev
data                3.6T     41k    3.6T     0%    /data
data/virt           3.6T     44k    3.6T     0%    /data/virt
data/virt/DC1       3.6T     40k    3.6T     0%    /data/virt/DC1
data/virt/OS        3.6T      3G    3.6T     0%    /data/virt/OS
data/virt/SERVER    3.6T     40k    3.6T     0%    /data/virt/SERVER

Last dump(s) done (Dump '>' file systems):

Checking status of zfs pools:
all pools are healthy

Network interface status:
Name    Mtu Network       Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
usbus     0 <Link#1>                               0     0     0        0     0     0
re0    1500 <Link#2>      15:da:e9:bd:b5:8f     3715     0     0     2622     0     0
re0    1500 192.168.1.0   ohlala                3653     -     -     2565     -     -
re0    1500 fe80::16da:e9 fe80::16da:e9ff:f        0     -     -        1     -     -
usbus     0 <Link#3>                               0     0     0        0     0     0
usbus     0 <Link#4>                               0     0     0        0     0     0
lo0   16384 <Link#5>                               0     0     0        0     0     0
lo0   16384 localhost     ::1                      0     -     -        0     -     -
lo0   16384 fe80::1%lo0   fe80::1                  0     -     -        0     -     -
lo0   16384 your-net      localhost                0     -     -        0     -     -

Local system status:
 9:05AM  up 53 mins, 2 users, load averages: 0.00, 0.00, 0.00

Mail in local queue:
mailq: Mail queue is empty

Mail in submit queue:
mailq: Mail queue is empty

Security check:
    (output mailed separately)

Checking for rejected mail hosts:

Checking for denied zone transfers (AXFR and IXFR):

Doing zfs daily snapshots:
taking snapshot, data@daily-2012-06-17

Doing zfs scrubs:
starting scrub on data
  pool: data
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jun 17 09:05:30 2012
config:

NAME              STATE     READ WRITE CKSUM
data              ONLINE       0     0     0
  raidz1-0        ONLINE       0     0     0
    gpt/5YD7JNXT  ONLINE       0     0     0
    gpt/5YD7SM2A  ONLINE       0     0     0
    gpt/5YD7SMPJ  ONLINE       0     0     0
logs
  mirror-1        ONLINE       0     0     0
    gpt/zil0      ONLINE       0     0     0
    gpt/zil1      ONLINE       0     0     0
cache
  gpt/cache0      ONLINE       0     0     0
  gpt/cache1      ONLINE       0     0     0
spares
  gpt/5YD8AVDH    AVAIL  

errors: No known data errors
starting scrub on zroot
  pool: zroot
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jun 17 09:06:38 2012
config:

NAME        STATE     READ WRITE CKSUM
zroot       ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    ada1p3  ONLINE       0     0     0
    ada2p3  ONLINE       0     0     0

errors: No known data errors

-- End of daily output --



Build a home server - take two, part 2.

Update the system
ohlala# freebsd-update fetch
ohlala# freebsd-update install
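
If you want the server to check for updates by itself, freebsd-update also has a cron mode that fetches updates and mails root when something is available. A possible /etc/crontab entry (the time of day is your choice):

0	3	*	*	*	root	freebsd-update cron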


Basic monitoring
We are going to use smartmontools, and we want to be informed by email when something happens to the hard disks. The default MTA installed is Sendmail, but it is far too big for our purpose, so we replace it with sSMTP:

Go to http://www.freebsd.org/ports/ and search for ssmtp. There you get information about where sSMTP is located in the /usr/ports directory. Simply cd into that directory and start the build:

ohlala# cd /usr/ports/mail/ssmtp/
ohlala# make install replace clean


I made a default installation, but selected the extra patches. The "replace" target replaces Sendmail as the default mailer with sSMTP. See also http://www.freebsd.org/doc/handbook/outgoing-only.html and http://www.freebsd.org/doc/handbook/mail-changingmta.html#MAIL-DISABLE-SENDMAIL. Continue with configuring sSMTP:

ohlala# mv /usr/local/etc/ssmtp/ssmtp.conf.sample /usr/local/etc/ssmtp/ssmtp.conf
ohlala# mv /usr/local/etc/ssmtp/revaliases.sample  /usr/local/etc/ssmtp/revaliases

ohlala# vi /usr/local/etc/ssmtp/ssmtp.conf
ohlala# vi /usr/local/etc/ssmtp/revaliases

Check drdata.blogg.se/2012/april/build-a-home-server-part-4.html for details.
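
For reference, a minimal ssmtp.conf looks roughly like this; the mail hub, domain and credentials below are placeholders, see the linked post for the real details:

root=postmaster
mailhub=mail.example.com:587
rewriteDomain=example.com
hostname=ohlala.example.com
UseSTARTTLS=YES
AuthUser=myuser
AuthPass=mypassword
FromLineOverride=YES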

Continue with installing Smartmontools:
Again, search http://www.freebsd.org/ports/ for the location of smartmontools in the /usr/ports directory, cd into it, and install by typing make install clean.

List your available hard disks. Here you can see the device names of the data disks:

ohlala# ls -l /dev/ada*
crw-r-----  1 root  operator    0,  89 May 27 08:07 /dev/ada0
crw-r-----  1 root  operator    0,  91 May 27 08:07 /dev/ada1
crw-r-----  1 root  operator    0,  93 May 27 08:07 /dev/ada2
crw-r-----  1 root  operator    0, 101 May 27 08:07 /dev/ada2p1
crw-r-----  1 root  operator    0, 103 May 27 08:07 /dev/ada2p2
crw-r-----  1 root  operator    0, 105 May 27 08:07 /dev/ada2p3
crw-r-----  1 root  operator    0, 107 May 27 08:07 /dev/ada2p4
crw-r-----  1 root  operator    0, 109 May 27 08:07 /dev/ada2p5
crw-r-----  1 root  operator    0,  95 May 27 08:07 /dev/ada3
crw-r-----  1 root  operator    0, 111 May 27 08:07 /dev/ada3p1
crw-r-----  1 root  operator    0, 113 May 27 08:07 /dev/ada3p2
crw-r-----  1 root  operator    0, 115 May 27 08:07 /dev/ada3p3
crw-r-----  1 root  operator    0, 117 May 27 08:07 /dev/ada3p4
crw-r-----  1 root  operator    0, 119 May 27 08:07 /dev/ada3p5
crw-r-----  1 root  operator    0,  97 May 27 08:07 /dev/ada4
crw-r-----  1 root  operator    0,  99 May 27 08:07 /dev/ada5
ohlala#

ohlala# cp /usr/local/etc/smartd.conf.sample /usr/local/etc/smartd.conf
ohlala# vi /usr/local/etc/smartd.conf

# The word DEVICESCAN will cause any remaining lines in this
# configuration file to be ignored: it tells smartd to scan for all
# ATA and SCSI devices.  DEVICESCAN may be followed by any of the
# Directives listed below, which will be applied to all devices that
# are found.  Most users should comment out DEVICESCAN and explicitly
# list the devices that they wish to monitor.
#DEVICESCAN

/dev/ada0 -m [email protected] -M test

/dev/ada0 -a -d auto -o on -S on -s (S/../.././02|L/../../6/03) -m [email protected]
/dev/ada1 -a -d auto -o on -S on -s (S/../.././02|L/../../6/03) -m [email protected]
/dev/ada2 -a -d auto -o on -S on -s (S/../.././02|L/../../6/03) -m [email protected]
/dev/ada3 -a -d auto -o on -S on -s (S/../.././02|L/../../6/03) -m [email protected]
/dev/ada4 -a -d auto -o on -S on -s (S/../.././02|L/../../6/03) -m [email protected]
/dev/ada5 -a -d auto -o on -S on -s (S/../.././02|L/../../6/03) -m [email protected]
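
The first /dev/ada0 line with -M test just makes smartd send a single test mail on startup, so you can verify that mail delivery works; it can be removed afterwards. The -s (S/../.././02|L/../../6/03) directive schedules a short self-test every night at 02:00 and a long self-test on Saturdays at 03:00. A self-test can also be kicked off manually to check that it works:

ohlala# /usr/local/sbin/smartctl -d auto -t short /dev/ada0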

ohlala# echo 'smartd_enable="YES"' >> /etc/rc.conf

ohlala# /usr/local/etc/rc.d/smartd start
Starting smartd.
(pass1:siisch1:0:0:0): SMART. ACB: b0 db 00 4f c2 40 00 00 00 00 f8 00
(pass1:siisch1:0:0:0): CAM status: ATA Status Error
(pass1:siisch1:0:0:0): ATA status: 51 (DRDY SERV ERR), error: 04 (ABRT )
(pass1:siisch1:0:0:0): RES: 51 04 00 4f c2 40 00 00 00 f8 00
(pass2:ahcich0:0:0:0): SMART. ACB: b0 db 00 4f c2 40 00 00 00 00 f8 00
(pass2:ahcich0:0:0:0): CAM status: ATA Status Error
(pass2:ahcich0:0:0:0): ATA status: 51 (DRDY SERV ERR), error: 04 (ABRT )
(pass2:ahcich0:0:0:0): RES: 51 04 00 4f c2 40 00 00 00 f8 0
ohlala#



Part 3 - Configure the zpool.



Build a home server - take two, part 1.

The new approach - OS install
I have two 60 GB SSD disks. So far, after having installed the OS, VirtualBox and other applications, I have used 3.5 GB on root. My guess is that when everything is in place I will have used about 4 GB on root. It seems kind of wasteful to dedicate two quite expensive 60 GB SSD disks and not fully use them. And I really want to take advantage of the nifty ZIL and L2ARC features...

You do not have to dedicate whole disks to ZFS, but if you do, ZFS enables the disks' write cache, which is of course an advantage. ZFS can also use partitions, and that is what I will do to fully utilize the SSD disks.

Follow this guide (among many) to create a ZFS root mirror: http://www.freebsdwiki.net/index.php/ZFS,_booting_from. The only thing I did differently was the partitioning:

    # gpart add -b 34 -s 128 -t freebsd-boot -l boot0 ada0
    # gpart add -s 12288M -t freebsd-swap -l swap0 ada0
    # gpart add -s 16G -t freebsd-zfs -l root0 ada0
    # gpart add -s 4096M -t freebsd-zfs -l zil0 ada0
    # gpart add -t freebsd-zfs -l cache0 ada0
    # gpart add -b 34 -s 128 -t freebsd-boot -l boot1 ada3
    # gpart add -s 12288M -t freebsd-swap -l swap1 ada3
    # gpart add -s 16G -t freebsd-zfs -l root1 ada3
    # gpart add -s 4096M -t freebsd-zfs -l zil1 ada3
    # gpart add -t freebsd-zfs -l cache1 ada3
This gives you the following layout:
    [root@ohlala ~]# gpart show
    =>       34  117231341  ada0  GPT  (55G)
             34        128     1  freebsd-boot  (64k)
            162   25165824     2  freebsd-swap  (12G)
       25165986   33554432     3  freebsd-zfs  (16G)
       58720418    8388608     4  freebsd-zfs  (4.0G)
       67109026   50122349     5  freebsd-zfs  (23G)

    =>       34  117231341  ada3  GPT  (55G)
             34        128     1  freebsd-boot  (64k)
            162   25165824     2  freebsd-swap  (12G)
       25165986   33554432     3  freebsd-zfs  (16G)
       58720418    8388608     4  freebsd-zfs  (4.0G)
       67109026   50122349     5  freebsd-zfs  (23G)
    [root@ohlala ~]#
Edit /etc/fstab:
/dev/gpt/swap0 none swap sw 0 0
/dev/gpt/swap1 none swap sw 0 0
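
The new swap partitions can be enabled right away, without waiting for the reboot:

[root@ohlala ~]# swapon -a
[root@ohlala ~]# swapinfo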

After reboot you should have an output similar to this:
[root@ohlala ~]# df -h
Filesystem          Size    Used   Avail Capacity  Mounted on
zroot                12G    345M     12G     3%    /
devfs               1.0k    1.0k      0B   100%    /dev
zroot/home           12G     46M     12G     0%    /home
zroot/tmp            12G     55k     12G     0%    /tmp
zroot/usr            15G    3.1G     12G    20%    /usr
zroot/var            12G     97M     12G     1%    /var
[root@ohlala ~]#



Part 2 - Configure basic monitoring.


Build a home server - part 8

Configure LVM for snapshots
I got it all to work: software RAID, LVM and Flashcache. Proven, stable techniques. But then I realized that a snapshot in LVM takes the same amount of disk space as the source... That will not work on a home server with, say, one or more 1 TB file systems. The techniques are good, no doubt about it, but they have also grown old. I have to abandon this approach.

The end.
