Home Fileserver: ZFS setup

The next step in setting up your own ZFS home fileserver is to set up your ZFS storage pool and file systems, and then share them with other machines. The ZFS commands should work on any operating system where ZFS is available. I have used two machines in this example: a Sun Solaris machine as the fileserver, and a Macintosh as the client.

Update 2009-05-10: Please see the post Home Fileserver: ZFS File Systems for more details on setting up a practical file system hierarchy.

Choose your operating system

The first step is to choose which operating system you will install on your machine. Personally, I can recommend Sun Solaris: I have it running well here, and it is the operating system that ZFS was originally designed to run on. ZFS also runs on FreeBSD 7.0, on Linux under FUSE, and on Mac OS X with the relevant download from developer.apple.com. If you are able to choose, and wish to have the most stable and reliable version of ZFS, I would personally recommend Sun Solaris as your operating system.

If you choose to install Sun Solaris, then the next hurdle is to find out which version. When I was looking to install Solaris I had to choose among the following:

  • Solaris: the major release (version 10 is current; 11, 12, etc. will follow), which occurs once every year or two and is solid and heavily tested.
  • Solaris Express Developer Edition (SXDE): is a release that occurs once every 2 months or so, and is quite well tested and stable.
  • Solaris Express Community Edition (SXCE): is a release that occurs once every week or two, has some testing and may or may not be stable in all areas.
  • OpenSolaris Developer Preview: a preview of a next-generation development of Solaris, part of a project called Indiana. It includes new technologies such as a network-based package manager inspired by Debian’s APT, plus many other new features which you can look up if of interest.

Solaris Express Developer Edition and Solaris Express Community Edition are part of the project called Nevada.

My comments above on each version of Solaris reflect a brief, but hopefully correct, understanding. Even some Sun insiders seem to think that having all these choices is a bit confusing.

Anyway, for running Solaris and ZFS on a home fileserver, my opinion is that your best bet is to choose from the SXDE or SXCE editions.

For quicker bug fixes and new features, I have chosen to use SXCE.

Getting and installing Solaris

To get SXCE, go to http://www.opensolaris.org, click on the Download icon at the top right of the page, then select the DVD link under the text ‘Solaris Express CE’.

You’ll have to register, but it’s free. Then you can download the Solaris image.

You can then burn the ISO file with whatever DVD burning software you have, e.g. Disk Utility on the Mac or Nero on Windows.

Then you can install Solaris by booting the burned DVD. Installation is simple to do, but fairly slow. After selecting your region, language, etc. and letting the installation get under way, it’s a good time to go and get a coffee and something to eat πŸ™‚

During installation, or after booting the new Solaris installation, you can create your user account. You may wish to use the same user and group ids as on your other UNIX box, e.g. your Mac. This will save permissions hassles later when you copy data over from that machine.
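For example, if `id simon` on the Mac reports uid 501 and gid 501, a matching account can be created on Solaris along these lines (the uid/gid values, shell, and home directory here are assumptions; substitute your own):

```shell
# Create a group and user whose numeric ids match the Mac account,
# so file ownership maps cleanly when copying data across machines.
# uid/gid 501 and the home directory path are assumptions; use your own.
groupadd -g 501 simon
useradd -u 501 -g 501 -d /export/home/simon -m -s /bin/bash simon
passwd simon
```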

Configuration

Once you have a system where ZFS is available, you can get to work setting up your ZFS storage pool.

You’ll need to decide what kind of setup you want (redundancy or no redundancy), and which disks you will use. I will assume here that you have multiple disks of the same size and that you wish to set up a large storage pool with built-in, single-parity redundancy. That makes the choice simple: we’ll set up a RAIDZ array, which is roughly equivalent to the old RAID level 5 setup, but with extra ZFS features not available with RAID level 5.

Once your disks are connected within the case, you need to get the ids of your disks, because you’ll need to specify these disk ids when creating the storage pool.

In Solaris, you can type, as root user:

# format

This will give the following output:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 20007 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@4/ide@0/cmdk@0,0
       1. c1t0d0 <ATA-WDC WD7500AAKS-0-4G30-698.64GB>
          /pci@0,0/pci1043,8239@5/disk@0,0
       2. c1t1d0 <ATA-WDC WD7500AAKS-0-4G30-698.64GB>
          /pci@0,0/pci1043,8239@5/disk@1,0
       3. c2t0d0 <ATA-WDC WD7500AAKS-0-4G30-698.64GB>
          /pci@0,0/pci1043,8239@5,1/disk@0,0
Specify disk (enter its number): ^C
#

Now hit CTRL-C to break out of it.

I have installed Solaris on disk 0, and we will now use the 750GB SATA disks numbered 1 to 3 in this list. They are sold as 750GB disks, but only about 692GB of each is actually usable: the disk industry quotes sizes in decimal gigabytes (10^9 bytes) while the operating system reports binary gigabytes (2^30 bytes), which makes the disks appear bigger than they really are.

Now we’ll issue the ZFS command to create a RAIDZ array of these three 750GB drives, which should give around 1.4TB for data (2 x 692GB), with the other 692GB used by the RAIDZ array for parity data. This parity data gives the array the ability to (1) remain operational in the event that one of the three hard drives fails, and (2) self-heal any ‘latent defects’ (aka silent errors or bit rot), so the space is not being wasted πŸ™‚

We’re going to call this data storage pool ‘tank’, as used in all the Sun ZFS demos and examples. Presumably tank refers to a large storage container like a pool:

# zpool create tank raidz1 c1t0d0 c1t1d0 c2t0d0

Now let’s check its status:

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0

errors: No known data errors
# 

You can see that this pool is called ‘tank’ and is online, and that no scrub has been requested. The configuration for the ‘tank’ pool is a single RAIDZ1 (single-parity RAIDZ) vdev (virtual device); you can see the id of each disk in the vdev and, for each disk, that no read, write or checksum errors have been found so far.
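The ‘scrub: none requested’ line hints at another of those extra ZFS features: you can ask the pool to read back and verify every block against its checksums, repairing anything amiss from the parity data. A periodic scrub looks like this:

```shell
# Verify every block in the pool against its checksums; with RAIDZ,
# any corrupt blocks found are rebuilt from parity.
zpool scrub tank

# The 'scrub:' line of the status output shows progress and results.
zpool status tank
```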

If you’re curious about ZFS, you can discover a load of useful stuff in the ZFS documentation and on the OpenSolaris ZFS community pages, amongst other places.

Now let’s see how much space we have (these figures are from an in-use pool, not a newly created one):

# zpool list tank
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank  2.03T   996G  1.06T    47%  ONLINE  -
#
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   663G   701G  25.3K  /tank
# 

The first command ‘zpool list tank’ gives raw storage capacity data for the storage pool that includes capacity used to store parity data.

The second command ‘zfs list tank’ gives storage capacity data for the file systems created within the storage pool, excluding capacity used for parity data, i.e. the figures only count user data.

So we can see that the pool has around 2TB of capacity (including parity data), and that in the file systems created under the ‘/tank’ mountpoint we have used 663GB and have 701GB available. 663GB + 701GB = 1364GB, or around 1.3TB, so this seems about right, considering that an additional 692GB is used for parity data (~1.3TB + ~0.7TB = ~2TB).

So it’s looking good.

Setting up your file systems

Now we’ll move on to exploring what we can do with storage pools. So that I don’t risk messing up my existing storage pool, I’m going to create a new pool which will use a 4GB USB memory stick (thumbdrive for U.S. readers?).

Once the pool is created, all the ZFS commands will be identical to the ones I would use if I was working with standard hard drives. There will be no redundancy with the USB stick as I will only use one of them here, but that doesn’t matter for this example.

First I’ll plug the 4GB stick into the USB slot and then see what its ‘disk’ device id is:

# format -e < /dev/null
Searching for disks...

The device does not support mode page 3 or page 4,
or the reported geometry info is invalid.
WARNING: Disk geometry is based on capacity data.

The current rpm value 0 is invalid, adjusting it to 3600
done

c4t0d0: configured with capacity of 3.84GB


AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 20007 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@4/ide@0/cmdk@0,0
       1. c1t0d0 <ATA-WDC WD7500AAKS-0-4G30-698.64GB>
          /pci@0,0/pci1043,8239@5/disk@0,0
       2. c1t1d0 <ATA-WDC WD7500AAKS-0-4G30-698.64GB>
          /pci@0,0/pci1043,8239@5/disk@1,0
       3. c2t0d0 <ATA-WDC WD7500AAKS-0-4G30-698.64GB>
          /pci@0,0/pci1043,8239@5,1/disk@0,0
       4. c3t0100001E8C38A43E00002A0047C465C5d0 
          /scsi_vhci/disk@g0100001e8c38a43e00002a0047c465c5
       5. c4t0d0 < -USBFLASHDRIVE-34CE cyl 1965 alt 2 hd 128 sec 32>
          /pci@0,0/pci1043,8239@2,1/storage@a/disk@0,0
Specify disk (enter its number): 
# 

Now we’ll create a ZFS storage pool called ‘test’ that will use the USB stick to store its data. I had to use the ‘-f’ flag here because the USB stick previously had a UFS file system stored on it, and when I inserted it, Solaris mounted it as ‘/media/USB FLASH DRIVE’, so we’re forcing it to ignore errors here, and do what we want anyway:

# zpool create -f test c4t0d0

Check:

# zpool status test
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          c4t0d0    ONLINE       0     0     0

errors: No known data errors
# 
# zpool list test  
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
test  3.81G   584K  3.81G     0%  ONLINE  -
# 
# zfs list test
NAME   USED  AVAIL  REFER  MOUNTPOINT
test   106K  3.75G    18K  /test
# 

In this case, as there is no redundancy (i.e. we’re not using RAIDZ or a mirror), the figures shown by ‘zpool list test’ and ‘zfs list test’ match, more or less.

Now let’s create a file system for a user called simon:

# zfs create test/home
# zfs create test/home/simon
#
# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
test             186K  3.75G    19K  /test
test/home         57K  3.75G    21K  /test/home
test/home/simon   18K  3.75G    18K  /test/home/simon
# 

As these were created as root, now let’s change the owner and group ids for the test/home/simon file system to simon (owner id) and simon (group id):

# cd /test/home 
# ls -l
total 3
drwxr-xr-x   2 root     root           2 Mar  8 20:18 simon
# 
# chown simon simon
# chgrp simon simon
# ls -l
total 3
drwxr-xr-x   2 simon    simon          2 Mar  8 20:18 simon
# 

Now let’s create a file in /test/home/simon called ‘readme.txt’ and fill it with some text. As I’m still root user, I’ll change the owner and group ids to simon again:

# cd simon
# echo 'This is a test.' > readme.txt
# ls -l
total 2
-rw-r--r--   1 root     root          16 Mar  8 20:24 readme.txt
# chown simon readme.txt
# chgrp simon readme.txt
# ls -l 
total 2
-rw-r--r--   1 simon    simon         16 Mar  8 20:24 readme.txt
# 

Making the storage pool accessible from other machines

The next step here is to enable the file system at /test/home/simon to be accessible from another machine. With ZFS we have three possibilities: sharing with SMB/CIFS, with NFS, or as an iSCSI target. Only ZFS volumes are shareable as iSCSI targets, and our file system is not a volume, so we can only share it with CIFS or NFS. For this example, I will share the file system using CIFS:

# zfs set sharesmb=on test/home/simon
# zfs get all test/home/simon
NAME             PROPERTY         VALUE                  SOURCE
test/home/simon  type             filesystem             -
test/home/simon  creation         Sat Mar  8 20:18 2008  -
test/home/simon  used             19.5K                  -
test/home/simon  available        3.75G                  -
test/home/simon  referenced       19.5K                  -
test/home/simon  compressratio    1.00x                  -
test/home/simon  mounted          yes                    -
test/home/simon  quota            none                   default
test/home/simon  reservation      none                   default
test/home/simon  recordsize       128K                   default
test/home/simon  mountpoint       /test/home/simon       default
test/home/simon  sharenfs         off                    default
test/home/simon  checksum         on                     default
test/home/simon  compression      off                    default
test/home/simon  atime            on                     default
test/home/simon  devices          on                     default
test/home/simon  exec             on                     default
test/home/simon  setuid           on                     default
test/home/simon  readonly         off                    default
test/home/simon  zoned            off                    default
test/home/simon  snapdir          hidden                 default
test/home/simon  aclmode          groupmask              default
test/home/simon  aclinherit       secure                 default
test/home/simon  canmount         on                     default
test/home/simon  shareiscsi       off                    default
test/home/simon  xattr            on                     default
test/home/simon  copies           1                      default
test/home/simon  version          3                      -
test/home/simon  utf8only         off                    -
test/home/simon  normalization    none                   -
test/home/simon  casesensitivity  sensitive              -
test/home/simon  vscan            off                    default
test/home/simon  nbmand           off                    default
test/home/simon  sharesmb         on                     local
test/home/simon  refquota         none                   default
test/home/simon  refreservation   none                   default
# 

Note that the ‘sharesmb’ property has the value ‘on’ now, so it is now shared as a CIFS share. We didn’t specify a name to use as a share, so let’s see which default share name ZFS has assigned for us:

# sharemgr show -vp
default smb=() nfs=()
zfs
    zfs/test/home/simon smb=()
          test_home_simon=/test/home/simon
# 

So we can see here that the share name assigned is ‘test_home_simon’. We could easily have specified our own preferred share name when we set the ‘sharesmb’ property to ‘on’ earlier, if we had wanted to.
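Specifying your own share name is done when setting the property; a sketch (the share name ‘simon’ here is just an example):

```shell
# Set an explicit CIFS share name instead of the auto-generated one.
zfs set sharesmb=name=simon test/home/simon

# Confirm the share name now in use.
sharemgr show -vp
```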

Now let’s ensure that the Solaris SMB service is running, so that this share will be visible from any connected client machines:

# svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical,
 which has multiple instances.

# svcs | grep smb
online         15:49:20 svc:/network/smb/server:default

I used CIFS to share the file system in this example. In previous experiments using NFS, I noticed disappointing write speeds from the Mac to the Solaris fileserver and, from what I could find out, there are some issues with NFS shares of ZFS file systems that result in fairly slow write speeds. I think the problem relates to NFS requiring an acknowledgement during write operations, which hurts performance. But don’t quote me on this, as I was unable to satisfy myself that I had found the definitive answer, and it may just be that Mac OS X’s NFS implementation is flawed.
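For completeness, if you wanted to try NFS despite the write-speed caveat, sharing works much the same way via the ‘sharenfs’ property. A sketch (the option string is passed through to share_nfs, and the network address below is an assumption):

```shell
# Share the file system over NFS to everyone...
zfs set sharenfs=on test/home/simon

# ...or instead restrict read/write access to one local subnet
# (example network address; adjust for your LAN).
zfs set sharenfs=rw=@192.168.1.0/24 test/home/simon
```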

Update 30/03/2008: extra steps required for CIFS setup

I noticed some differences between OpenSolaris Nevada build 82 (SXCE) and build 85 (clean install) regarding CIFS shares.

Existing working CIFS shares from build 82 didn’t work after moving to build 85.

In order to see why, I looked up the CIFS guide here:
http://docs.sun.com/app/docs/doc/820-2429/smbserver?a=view

where it says:

“The Samba and CIFS services cannot be used simultaneously on a single Solaris system. The Samba service must be disabled in order to run the Solaris CIFS service. For more information, see How to Disable the Samba Service.”

So, disable samba:

# svcs | grep samba
maintenance 20:35:42 svc:/network/samba:default
# svcadm disable svc:/network/samba
# svcs | grep samba
#

When I try to access the share from the Mac using autofs (smbfs:), I see the following message in /var/adm/messages:

Mar 27 20:38:48 solarisbox smbd[667]: [ID 653746 daemon.notice]
 SmbLogon[WORKGROUP\simon]: NO_SUCH_USER

So, something changed?

See here:
http://docs.sun.com/app/docs/doc/820-2429/configureworkgroupmodetask?a=view

# smbadm join -w WORKGROUP
Successfully joined workgroup 'WORKGROUP'
#

Edit the /etc/pam.conf file to support creation of an encrypted version of the user’s password for CIFS.
Add the following line to the end of the file:

# vi /etc/pam.conf

other password required pam_smb_passwd.so.1 nowarn

Specify the password for existing local users.

The Solaris CIFS service cannot use the Solaris encrypted version of the local user’s password for authentication. Therefore, you must generate an encrypted version of the local user’s password for the Solaris CIFS service to use. When the SMB PAM module is installed, the passwd command generates such an encrypted version of the password.

# passwd simon

Now it works again, after reinitialising the client’s autofs (on the Mac for me).

Configuring client machine access to the file system share

Now I’m going to configure my Mac so that it can access the CIFS share and read and write to it. As I’m using Mac OS X 10.5 (Leopard), autofs is available to auto mount specified file systems.

We’ll configure the autofs configuration files in /etc to use the share we created on the ZFS fileserver. Perform the following steps as root user on the Mac, or other OS/machine supporting autofs.

First create a mountpoint directory where our share will reside:

sh-3.2# mkdir /shares
sh-3.2# cd /etc
sh-3.2# ls -l auto*
-rw-r--r--  1 root  wheel    67 Oct 10 06:53 auto_home
-rw-r--r--  1 root  wheel   236 Feb 24 15:00 auto_master
-rw-r--r--  1 root  wheel   164 Oct 10 06:53 auto_master.org
-rw-r--r--  1 root  wheel   319 Mar  1 14:49 auto_smb
-rw-r--r--  1 root  wheel    89 Feb 19 15:36 auto_zfs
-rw-r--r--  1 root  wheel  1755 Feb 24 14:59 autofs.conf
-rw-r--r--  1 root  wheel  1759 Oct 10 06:53 autofs.conf.org
sh-3.2# 
sh-3.2# vi auto_master

Now add the following line to the end of the ‘auto_master’ file:

# simon's additions
/shares    auto_smb        -nobrowse

This specifies that all paths for shares specified in the ‘auto_smb’ file will be relative to the /shares directory. Now let’s create the ‘auto_smb’ file to specify the relative mountpoint for accessing the file system we shared from the fileserver:

# vi auto_smb

test_home_simon -fstype=smbfs ://simon:password@fileserver_ip_address/test_home_simon

Save the file.

Now be sure to check that the permissions are correct on this ‘auto_smb’ file, or you may leave passwords visible!
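Locking the file down so only root can read it goes something like this:

```shell
# Make the credentials file readable and writable by root only,
# so the embedded password isn't world-readable.
chmod 600 /etc/auto_smb
chown root:wheel /etc/auto_smb

# Should now show: -rw-------
ls -l /etc/auto_smb
```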

Now that autofs has been configured to access the shared file system from the fileserver using CIFS, we can cause the Mac to remount any specified file system shares:

sh-3.2# automount -vc
automount: /net updated
automount: /home updated
automount: /shares mounted
sh-3.2#

Looks good.

Within the Solaris file manager UI, you may need to set the attributes within the ‘Permissions’ and ‘Access List’ tabs of the properties for the /test/home/simon directory. After that, you may need to restart the Solaris machine (or probably just the relevant services), and possibly the client machine, to ensure it picks up the new properties for the share.

Now let’s see if we can read the file ‘readme.txt’ that we created on the fileserver:

Macintosh:~ simon$ cd /shares
Macintosh:shares simon$ ls -l
drwx------  1 simon  wheel  16384 Mar  8 20:24 test_home_simon
Macintosh:shares simon$ cd test_home_simon
Macintosh:test_home_simon simon$ ls -l
total 1
-rwx------  1 simon  wheel  16 Mar  8 20:24 readme.txt
Macintosh:test_home_simon simon$ 
Macintosh:test_home_simon simon$ cat readme.txt
This is a test.
Macintosh:test_home_simon simon$ 

Voilà, it worked. We successfully read the file hosted on the fileserver.

A little tip if you’re using the Mac’s Finder application to view your shares graphically: you may need to restart the Finder, as it gives a special ‘share’ icon to CIFS and NFS shares, and the icon seems not to display correctly until the Finder restarts. To relaunch it, hold down the ‘alt’ key, right-click the Finder icon in the Dock, then click the ‘Relaunch’ menu item in the popup menu. Perhaps Apple needs to sync ‘automount -vc’ with a repaint of the Finder app?

Conclusion

That has given you a simple overview of how to create a ZFS storage pool, how to create a file system within the pool, and how to share the file system with another machine across the network using CIFS. Time for a beer to celebrate! πŸ˜‰

Further reading

You may find the following links interesting:

And these inspiring blogs from some great Sun guys, which I learnt a lot from:

And last, but certainly not least, the two Sun guys who created the amazing ZFS:

The more you learn about ZFS, the more you appreciate what a true engineering marvel it is, so I have deep respect for Bill and Jeff, and all the other people that helped make ZFS a reality — congratulations to you all!

And I applaud Sun for encouraging their staff to create their own blogs, as it helps spread the good word about their products and projects, and it is done in a personal style that shows the blog authors’ enthusiasm for the subject.

And having started to know a bit about Solaris via ZFS, I am beginning to see what a great operating system it really is.

Now I’m off to look for a good hosting service that gives me a full root access Solaris account so I can snapshot my running system regularly and zfs send / recv snapshots of it to another geographical location for safety. I’ve seen Joyent.com, and if anyone knows of others, feel free to comment below.

For more ZFS Home Fileserver articles see here: A Home Fileserver using ZFS. Alternatively, see related articles in the following categories: ZFS, Storage, Fileservers, NAS.

Join the conversation

127 Comments

  1. Note that on Leopard you can also use the Directory Utility (Applications > Utilities) to set the automounts graphically.

  2. Cheers Simon, this is great stuff! I’ve used ZFS a fair amount on Solaris & Nevada, but never played with CIFS shares from ZFS or autofs. Something else to add to my list of things to try.

    Cheers z0mbix

  3. The issue with NFS is that a client requesting a sync ensures that the sync is committed to the server before returning (i.e. it’s semantically correct). Samba doesn’t, so you think you’ve written it successfully but it may fail. It’s more to do with how the client program is written; using ‘tar’ to untarball a lot of small files over NFS may be slower than SMB owing to this issue.

    That doesn’t mean that for single file writes (the majority of accesses) NFS is any slower, however. Google for NFS over ZFS to find out more.

    I’d be more interested if you could use NFS4 to mount the shares, as they’re supposed to maintain the extended attributes natively. At the moment, a Mac client mounting an NFS share and generating extended attributes will translate them to AppleDouble files, rather than use the remote system’s extended attribute support. NFS4 is supposed to provide that, but the NFS4 stuff is even more alpha than ZFS is on Mac OS X. Maybe when 10.6 comes out …

  4. @Patrick:

    No, I didn’t see any performance problems with viewing .mpg or .avi video files from the Mac over the network when using a gigabit switch. Have you looked in the Network tab of the Mac’s ‘Activity Monitor’ app whilst viewing an .mpg file? How much RAM do you have on the fileserver? Assuming you have a fast network (not wireless), this sounds like it could be a resource problem on the fileserver: limited RAM, or some other problem perhaps?

    BTW, thanks for quoting a reference to my pages from your page at: http://schlaepfer.nine.ch/twiki/bin/view/Schlaepfer/SelfMadeNas

  5. @Alex Blewitt:

    Thanks for the info. Yes, I think I also read somewhere about NFS waiting for the ‘data written OK’ ack before continuing. When I revisit NFS sharing I will take another look into this subject.

    When using NFS sharing, I remember also getting very slow writes from the Mac to the ZFS fileserver on the Solaris box when I was using the Mac’s Finder app to drag and drop files to copy them to the fileserver. The speed was (sometimes?) abysmal (3-5MBytes/sec) when copying using the Finder. However, when I used the command line, I got a sustained 25-30MBytes/second, but I forget with which command: rsync or cp. So it seems there’s also some funny business going on with the Mac’s Finder app too.

    The trouble with quoting speeds here is that I didn’t keep a copy of all my tests and setups, and my pool configuration was also changing, so to get definitive speed comparisons I would need to do this scientifically, but I don’t have the time right now πŸ™‚

  6. Dear Sir:

    I like your treatise a lot, however, I would wonder if you would be willing to post some “howtoage” on accessing a ZFS managed and published volume via iSCSI, that would appear to MacOS as either a readable/writable ZFS volume (using the developer ZFS tools for MacOS), or a raw volume to MacOS that could be formatted as HFS+ after being mounted as a raw iSCSI device in need of formatting by MacOS?

    Thanks in advance!

  7. I was in a similar boat, setting up a fileserver, on existing hardware. This means I didn’t need to read the HCL! πŸ™‚

    I tried Nexenta (CD wouldn’t boot) and some Solaris flavours (OS wouldn’t boot) but then trialed FreeBSD, which worked beautifully.

    I gotta say this ZFS business is one classy affair. After much meddling and continuous awe, I decided to go for Linux, RAID and LVM. The main reason for this was the adding of disks to increase a pool. I couldn’t believe this was not possible in ZFS when using RAIDZ devices!

    With all the whizbang features of ZFS, you’d think this was a minor item to check for on the list.

    It pained me to leave ZFS behind; perhaps we’ll meet up for a fling in the future.

    McP.

  8. @Joe: The problem with trying to use the Mac as an iSCSI initiator (client) is that Mac OS X does not include a built-in iSCSI initiator service. There are third party iSCSI initiator offerings available but I didn’t get these to work. Lookup ‘small tree iscsi manager’ or ‘globalSAN iSCSI Initiator for OS X’, and there are a couple of others. It seems like the market is still immature in this area.

    I got iSCSI working by using 2 Solaris boxes because Solaris supports both the iSCSI target and iSCSI initiator services out of the box. I think if you are persistent you will probably manage to get it working for the Mac using one of the available offerings — but only the globalSAN one was free of charge when I last looked.

    You can take a look at my ZFS backups page for info on how to setup Solaris for the iSCSI target side, and you’ll have to consult the docs for whichever Mac iSCSI initiator software you use to get the Mac side working. Good luck and let me know if you get something working!

  9. @McPop: Yes, I understand that Solaris is picky about the hardware due to relative lack of drivers. As you say, I think I heard that FreeBSD has wider hardware support, but I didn’t look much into it.

    You are right: it’s a great pity that there’s no support yet for adding or removing disks in a ZFS RAIDZ vdev, although I expect they are working on this, but who knows when it will appear? The way I overcame this issue was to give my pool much more capacity than I currently need. Hopefully, by the time I need more space, they will have added the ‘add disk’ feature to RAIDZ vdevs. Alternatively, even without the ability to add disks to a RAIDZ vdev, you can ‘zfs send/receive’ your pool to another pool, like a large backup pool, then reconfigure your pool by destroying it and recreating it with the new disk(s), and ‘zfs send/receive’ back to your new larger pool. I haven’t tried this yet, but I think it’s possible.
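    The ‘zfs send/receive’ route might look like this, assuming a second pool called ‘backup’ with enough capacity (I haven’t verified this exact sequence; check the zfs(1M) man page for your build, since recursive send support is relatively recent):

```shell
# Snapshot the whole pool recursively, then replicate it to the backup pool.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F -d backup

# After destroying and recreating 'tank' with the new disks,
# the same steps in reverse bring the data back.
```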

    Even if I did need more space now and didn’t want to do the ‘zfs send/receive’ stuff, I could add new disks to the existing pool by adding a new vdev, i.e. a new RAIDZ or mirror vdev. Another alternative is to create a new pool with the new disks. So, you can see that there are already existing possibilities if you need more storage capacity, even before they add the new functionality to increase vdev capacity.

    Oh, and there’s yet another possibility: replace each disk in an existing vdev with a larger capacity drive, one by one. So, for example, you could replace four 500GB drives, one by one, with four 1TB drives to double your pool’s capacity. You can then utilise the old disks for a backup pool or just another storage pool.
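    The disk-by-disk upgrade uses ‘zpool replace’; a sketch with hypothetical device ids (replace one disk, wait for the resilver to complete, then move on to the next):

```shell
# Swap c1t0d0 out for a new, larger disk at c5t0d0 (hypothetical ids).
zpool replace tank c1t0d0 c5t0d0

# Watch the resilver; only when it finishes should the next disk be
# replaced. The extra capacity only becomes available once every disk
# in the vdev has been replaced.
zpool status tank
```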

  10. @Ruckus Ron: Thanks! That’s not an area I’ve really explored too much yet, but I believe it would be done using ZFS ACL. Go to opensolaris.org, click on Discussions and look into the storage or ZFS forums. Alternatively, click on Communities and look there under storage or ZFS. I’ll be looking there soon too πŸ™‚ And you could look at the ZFS Admin Guide under the ACL section.
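    As a small taste of ZFS ACLs, the Solaris chmod accepts NFSv4-style ACL entries on ZFS file systems; a sketch with a hypothetical user ‘fred’:

```shell
# Grant user 'fred' read and write access to the file via an ACL entry.
chmod A+user:fred:read_data/write_data:allow /test/home/simon/readme.txt

# ls -v lists the full ACL, entry by entry.
ls -v /test/home/simon/readme.txt
```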

  11. What is your transfer rate to and from your Solaris machine, and what is your CPU usage? I set up a similar system but my processor is a single-core Sempron. CPU usage peaks during file transfers and that gives me 20-30 MB/sec to Solaris over SMB and 30-40 MB/sec from Solaris over SMB. If I use compression, transfers to Solaris slow down quite a bit.

    I did an experiment with NFS and was able to get over 100 MB/sec from Solaris with around 30% CPU usage. Going to Solaris I sometimes get over 30 MB/sec followed shortly by 3 MB/sec so I am going to upgrade the CPU and use SMB until Mac OS X gets a better client or I figure out the work around.

    @Joe: I did some experiments with iSCSI and Mac OS X and came to the same conclusion about the iSCSI options. I never had a solution that was totally stable. After the Mac went to sleep, or sometimes just on its own, the Mac would freeze up and need to be hard rebooted. Even if it did work, I don’t know if I would be too into iSCSI, because you lose a lot of the ZFS benefits like flexible pools. It seems like a virtual drive is going backwards for the benefit of data protection.

  12. @Elvis: Using NFS I was getting quite disappointing speeds of around 25-30 MBytes/sec to and from the fileserver from the Mac Pro. When I switched to CIFS I got around 40+ MBytes/sec. Today I’m getting around 80 MBytes/sec with the same disks at each end, by using dual Gigabit ethernet at each end and an IEEE 802.3ad compliant Gigabit switch. See the page on Trunking for more details. However, this will cost around $100+ for a switch that can do it. With this setup I’ve reached the limits of the disk read/write speeds on the Mac Pro; to go faster I’d need a faster disk like the Samsung Spinpoint F1, or to add an extra disk to the Mac and use a RAID 0 configuration. This could potentially yield speeds of up to 150 MBytes/sec, but this is just a wild guess πŸ™‚

    CPU utilisation was minimal as I’m using a 64 bit dual-core AMD processor. Is the Sempron 64 bit or 32 bit? It seems the simple rule with ZFS is (1) give it loads of RAM (4GB is nice), and (2) use 64 bit processors.

  13. It is a 64bit Sempron. I ordered an Athlon X2 4050e and will try it out and max out the ram. It has been a really fun experience over all. I got interested in the project because I wanted to have a large storage server in the closet for video editing. I will keep playing and see how it goes. I will also try linking the two ethernet ports. Thanks for working most of it out before I even got interested.

  14. @Elvis: I think the Athlon X2 will give you a lot more power than the Sempron — it's what I use here and I never saw any issue with processor overload. At least 4GB of RAM should give you a nice setup. If you are able to, try to use ECC RAM, as it can detect and correct errors before they're even sent to the disk, and it's only about a 10% price premium. A no-brainer as far as I see it. If you're thinking of video editing then aggregating multiple gigabit ethernet (2+) links will certainly give a boost to your throughput. Glad you're having fun getting it all working. I had a lot of fun getting it all working here too — sometimes more 'fun' than I bargained for 🙂 If you get time, let me know how it works out once you get your new gear installed and set up.

    If you do link ethernet ports you'll need an IEEE 802.3ad capable switch. I used the Linksys SRW2008 as it is reasonably priced and allows flexible aggregation possibilities, but its downside is that the web interface requires Internet Explorer, which I used in a virtualised Windows environment on the Mac. With this switch, a limited number of options can be set via telnet access. In hindsight, I should probably have bought an HP ProCurve 1800-8G switch, as it has received very good feedback and I think it works with Firefox too, so you don't need to use Windows.

    Then you need to create a Link Aggregation Group (LAG) on the switch for the ethernet ports you wish to aggregate into a single fast link. For example, on an 8-port switch, choose ports 1 and 2 for your fileserver, create LAG #1, and tell the switch that LAG #1 will use ports 1 and 2. Then select LACP (Link Aggregation Control Protocol) for the LAG. You'll need to repeat this for the other machine where you'll do your video editing, but this time create LAG #2 for that machine and use ports 3 and 4.

    If it's a Mac, then in the System Preferences Network panel, click on the gear icon under the network interfaces and select 'Manage Virtual Interfaces'. There you'll select the 'Ethernet 1' and 'Ethernet 2' ports to aggregate, or bond. Then you can use DHCP as normal. And if you're using Parallels Desktop, you'll probably need to disable the 2 network interfaces it creates, as these may interfere with your aggregated link.

    If you’re not using a Mac, then hopefully this comment will help someone else.

    Have fun!
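
    On the Solaris side, the aggregation itself is done with dladm; a minimal sketch using the SXCE-era syntax (the nge0/nge1 interface names are examples, and the interfaces must be unplumbed before aggregating them):

```shell
# Aggregate nge0 and nge1 into aggregation key 1 (creates aggr1),
# negotiating with the switch via LACP in active mode
dladm create-aggr -l active -d nge0 -d nge1 1

# Bring the aggregated interface up and get an address via DHCP
ifconfig aggr1 plumb
ifconfig aggr1 dhcp start

# Check the state of the aggregation and its member links
dladm show-aggr
```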

  15. @chukaman: No luck, as I wasn't prepared to pay Joyent.com $45 a month so I could get root access on Solaris. Reluctantly I've decided to go for a standard Linux-based hosting solution. This means no snapshots 🙁

  16. I don't suppose you know how to set up AppleTalk networking for the shares on the Solaris box? Using netatalk under Linux it is fairly easy.

  17. After Simon very kindly left a comment on my blog referring me to ZFS and his guide here, I've downloaded and tried SXCE on VMware. With no prior knowledge of Solaris I've managed to set up ZFS and get it all shared nicely on my network. Thanks to Simon's heads-up, I'm going to be migrating my Gentoo fileserver to Solaris on ZFS as soon as I get some new hardware delivered.

    Thanks again Simon

  18. I have a P4@2.4GHz and 1GB RAM. I get about 20MB/sec read/write speed. That is due to the P4 being 32-bit. ZFS is 128-bit and doesn't like 32-bit CPUs. With a 64-bit dual-core CPU you get speeds in excess of 100MB/sec.

  19. Hi all. I managed to get iSCSI working under Mac OS X 10.5.3 with the OpenSolaris 2008.05 Release. iSCSI with ZFS without authentication. As a client I used the globalSAN iSCSI Initiator 3.3.0.35 beta.

  20. I also managed to format the iSCSI volume as an HFS+ volume and got Time Machine working. Putting the iMac to sleep also worked perfectly.
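
    For anyone wanting to reproduce this, the Solaris side of an iSCSI target in that era was just a couple of ZFS commands; a minimal sketch (pool and volume names are examples):

```shell
# Create a 200 GB ZFS volume (zvol) to back the iSCSI LUN
zfs create -V 200g tank/tmvol

# Export it as an iSCSI target using the shareiscsi property
zfs set shareiscsi=on tank/tmvol

# Make sure the iSCSI target daemon is running, then confirm the export
svcadm enable iscsitgt
iscsitadm list target
```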

  21. Marketing Con?

    More like “Not Knowing what the SI Prefixes Mean” User Error.

    A Gigabyte is a billion bytes, nothing more, nothing less. Get over it.

  22. Well, Lol, you're a bit late to the party, but sit down and take it easy 😉

    As we wouldn't want the marketing guys to get a bad reputation, how about you suggest they use GiB instead of GB in their HD marketing? Good luck! 😉

  23. OK, got it set up and running … but read/write is ridiculously slow.

    I'm using NFS from my Mac and getting 2MB/s.

    Will try CIFS share.

    Don’t think it’s the network.

    Could it be the SATA drivers/chipset? In the BIOS it's set to AHCI. How can I be sure Solaris is using the AHCI driver, I wonder?

    prtconf -D shows:

    pci8086,5044, instance #0 (driver name: ahci)
    disk, instance #0 (driver name: sd)
    disk, instance #1 (driver name: sd)
    disk, instance #2 (driver name: sd)
    disk, instance #3 (driver name: sd)

    What's that sd driver, I wonder?

    listing the devices shows that the drives are on:

    /devices/pci@0,0/pci8086,5044@1f,2:
    disk@0,0 disk@0,0:i,raw disk@0,0:r,raw disk@1,0:f disk@1,0:p disk@2,0:b,raw disk@2,0:l,raw disk@2,0:u,raw disk@5,0:i disk@5,0:r

    /devices/pci@0,0/pci8086,5044@1f,2/disk@0,0:

    /devices/pci@0,0/pci8086,5044@1f,2/disk@1,0:

    /devices/pci@0,0/pci8086,5044@1f,2/disk@2,0:

    /devices/pci@0,0/pci8086,5044@1f,2/disk@5,0:

    This guy has a similar setup and problem:

    http://mail.opensolaris.org/pipermail/zfs-discuss/2008-April/047039.html

    He fixed it, he thinks, by manually zeroing out the starts and ends of the drives … um, any idea how to do that?!

    thanks

    3am here, been at this since 6pm … the day before yesterday. It shouldn't be this hard! 🙂

  24. Hi Shaky, sorry to hear of your initial problems. I don’t have time right now to look at this in any depth as it’s late now, but to get you started, perhaps you might want to consider the following:

    1. Regarding the slow speeds you’re getting across your network, I presume you are using a 100 Mbps ethernet link — i.e. you’re not using a gigabit switch / router? Also, if you ARE using gigabit ethernet, be sure you are using Category 5e or 6 ethernet cable on BOTH ends of your network connections — i.e. on Mac and ZFS server. To check your network speed settings, as root issue the ‘dladm show-dev’ command from the command line — see the output from my system below, which shows that I have dual gigabit links operational:

    # dladm show-dev
    LINK STATE SPEED DUPLEX
    nge0 up 1000Mb full
    nge1 up 1000Mb full

    2. I was unhappy with the write speeds using NFS sharing, so I switched to using CIFS sharing, and this did improve write speeds quite considerably. Using gigabit ethernet, category 6 cables and CIFS sharing, I was getting around 40 MBytes/sec sustained transfer speeds with a 3-drive array using a RAIDZ1 vdev to form the storage pool. When I used link aggregation to link 2 ethernet ports at each end, the sustained speed rose to around 80 MBytes/sec.
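
    For reference, switching a file system's sharing from NFS to CIFS boils down to a couple of commands; a minimal sketch, assuming a dataset named tank/data (the names are examples):

```shell
# Stop sharing over NFS and share over CIFS instead
zfs set sharenfs=off tank/data
zfs set sharesmb=name=data tank/data

# Make sure the CIFS server service (and its dependencies) are running
svcadm enable -r smb/server
```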

    3. Regarding AHCI, I can’t remember — will look tomorrow. Try turning it off. As I have an NVidia-based MCP chipset, my system is using the nv_sata driver, but I think you say you are using a different motherboard, so I can’t say any more right now.
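
    4. As for the drive-zeroing fix mentioned in that zfs-discuss thread, a hedged sketch of how it could be done with dd (destructive: this wipes disk labels, and the device name below is only an example, so triple-check it against 'format' output first):

```shell
# DESTRUCTIVE: wipes disk labels. Device name is an example only.
# Zero the first 1 MB of the whole-disk raw device:
dd if=/dev/zero of=/dev/rdsk/c1t1d0p0 bs=1024k count=1

# Zero the last 1 MB: seek to (disk size in MB minus 1);
# this example assumes a 500,000 MB disk:
dd if=/dev/zero of=/dev/rdsk/c1t1d0p0 bs=1024k seek=499999 count=1
```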

    Hope this helps for now.

    Regards,
    Simon

  25. Ah, thought I had gigabit, but no:

    LINK STATE SPEED DUPLEX
    e1000g0 up 100Mb full

    Seems my router is only 10/100. I wonder if I can plug directly into my Mac. Will it get an IP address? Hmmm. Will keep on trying!

    Yeah, I'm using an Intel DG33TL motherboard with an Intel gigabit NIC and ICH9R SATA.

    thanks for this info!

  26. Hmm, I see this in the messages log:

    Aug 8 03:31:32 opensolaris ahci: [ID 405770 kern.info] NOTICE: ahci0: hba AHCI version = 1.20
    Aug 8 03:31:32 opensolaris unix: [ID 954099 kern.info] NOTICE: IRQ21 is being shared by drivers with different interrupt levels.
    Aug 8 03:31:32 opensolaris This may result in reduced system performance.

  27. Hi Shaky, you might be able to plug the ethernet cable directly into the Mac's ethernet port with a standard cable, or you might need to use a crossover cable. Then, unless your Mac is running a DHCP server, you will probably need to manually set the IP address on the Solaris end. I would get a switch, though, if you want the higher speeds permanently.

    If you plan to connect other machines to the server, your best bet will be to use a gigabit switch. I can recommend a DLink DGS-1008D, which is about 70 euros. This is a simple, fast and cheap 8-port unmanaged gigabit ethernet switch — just plug in cables and go, nothing else to do. It also has intelligence built in that detects which ports are in use, and only draws the electricity needed to power them. See more here:
    DLink DGS-1008D green ethernet 8-port Gigabit switch

    However, if you have 2 gigabit ethernet ports on each machine, you might want to aggregate the links for increased speed. If so, you'll need a managed switch, which costs about 150 euros. See here for more info: Home Fileserver: Trunking.

    Regarding AHCI, did you try turning it off in the BIOS to see if that works better? It’s not an area I know much about right now, but I think it’s disabled in my BIOS currently. Next time I reboot, I’ll take a look at my BIOS settings. There’s an AHCI wikipedia page, which says AHCI allows things like hot-plugging and NCQ.

    BTW, did you check your motherboard for compatibility with OpenSolaris by looking at the hardware compatibility list (HCL) that Sun have on their website? If not, it might be worth checking.

    Good luck!

  28. Yeah, mobo is on the list.

    Finally zeroed the drives. Took about 4 hours. Now booting to snv_95, and I noticed the installer boots to 32-bit. Strange.

    thanks for all the help!

  29. Ah, just the installer shows 32-bit. The actual installed Solaris is 64-bit.

    Well, zeroing the disks helped a little. Can now get 5MB/s. Still not awesome!

    I haven’t changed to IDE from AHCI yet.

    I tried the CIFS share method you detail above, but can't seem to get the passwd command to work.

    this bit:

    # vi /etc/pam.conf

    other password required pam_smb_passwd.so.1 nowarn

    # passwd simon

    I've edited the pam.conf file, but the passwd command executes the normal passwd binary, i.e. /usr/bin/passwd

    “When the SMB PAM module is installed, the passwd command generates such an encrypted version of the password”

    how do you install the PAM module? Do you need to restart something after editing the pam.conf file?

    thanks

  30. Ah, ignore that; it does work. It adds an entry to /var/smb/smbpasswd.

    sudo service com.apple.autofsd stop
    sudo service com.apple.autofsd start

    to restart the autofs, and it works! Well, I can cd to it. I can't see it in the Leopard Finder, even after relaunching it. 🙁

    Anyway, no faster 🙁

  31. If you can ‘cd’ to the share directory, that sounds good. I presume you can also create a file and ‘cat’ it etc from the Mac?

    If so, then it might just be a problem that the Mac's Finder doesn't seem to be aware of the share, so try to restart the Finder by holding down the 'alt' key and right-clicking on the Finder icon in the Dock. Then click the 'Relaunch' menu item in the popup menu.

    Hopefully that should do the trick.

    An initial access to a share might take a few seconds as autofs does its business. Shares will also timeout after a while, potentially causing short delays when re-accessing a little-used share.
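
    If Finder stays stubborn, a manual mount from the Mac's Terminal is another way to test the share; a minimal sketch (server, share and user names are examples):

```shell
# Make a mount point and mount the CIFS share by hand from the Mac
mkdir -p ~/tank
mount_smbfs //simon@solarisbox/tank ~/tank

# When finished with it:
umount ~/tank
```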

  32. Well, changed to IDE. Reinstalled. Got it all working again. Still slow. Over the network I get about 3MB/s.

    trying: dd if=/dev/zero of=delete.me bs=65536

    I get 14MB/s on my main drive, 7MB/s on the ZFS pool.

    Sooooo … it could be net-related. Still, 14MB/s is slow, right?

    In my messages file I see this:

    Aug 10 09:10:21 opensolaris pcplusmp: [ID 803547 kern.info] pcplusmp: pci8086,294c (e1000g) instance 0 vector 0x18 ioapic 0xff intin 0xff is bound to cpu 1
    Aug 10 09:10:21 opensolaris mac: [ID 469746 kern.info] NOTICE: e1000g0 registered
    Aug 10 09:10:21 opensolaris e1000g: [ID 766679 kern.info] Intel(R) PRO/1000 Network Connection, Driver Ver. 5.2.9
    Aug 10 09:10:22 opensolaris unix: [ID 954099 kern.info] NOTICE: IRQ21 is being shared by drivers with different interrupt levels.
    Aug 10 09:10:22 opensolaris This may result in reduced system performance.

    Then, for the drives, I see:

    Aug 10 09:10:23 opensolaris npe: [ID 236367 kern.info] PCI Express-device: ide@0, ata0
    Aug 10 09:10:23 opensolaris genunix: [ID 936769 kern.info] ata0 is /pci@0,0/pci-ide@1f,2/ide@0
    Aug 10 09:10:23 opensolaris unix: [ID 954099 kern.info] NOTICE: IRQ21 is being shared by drivers with different interrupt levels.
    Aug 10 09:10:23 opensolaris This may result in reduced system performance.

    not sure if that’s the problem.

  33. 3 MBytes/sec across your network sounds slow, but if you’re going through a 100 Mbps router you won’t get over about 10 MBytes/sec. Are you going through your router or are you using an ethernet patch cable to link your two machines directly? And what category ethernet cable are you using?

    I get this message for each pool disk, and you clearly see that it says sata, pci and NO ide:
    Aug 10 17:02:57 solarisbox sata: [ID 663010 kern.info] /pci@0,0/pci1043,8239@5,1 :

    Strangely, yours doesn’t mention sata, but does mention ide… doesn’t look right to me. Have you checked anything related to SATA in your BIOS?

    As for the messages relating to shared interrupts, I have similar ones in /var/adm/messages so it’s probably OK.
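
    To separate disk speed from network speed, it's worth running a purely local write/read test on the pool first; a minimal sketch (the pool name is an example):

```shell
# Local write test: stream ~1 GB of zeros into a file on the pool
dd if=/dev/zero of=/tank/ddtest bs=1024k count=1024

# Local read test: stream it back out, discarding the data
dd if=/tank/ddtest of=/dev/null bs=1024k

# Clean up the test file
rm /tank/ddtest
```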

  34. So, if you want over 10 MBytes/sec, you’ll need to get a gigabit switch. Plug the switch into the 10/100 router. Plug both computers into the switch.

    Secondly, check that both computers use at least category 5e ethernet cable (I use category 6, but 5e should suffice).

    That will get you a bit further down the road to getting higher speeds across your network.

    Then you might need to research, with users of the same mobo, how to get a SATA driver running nicely for your mobo's SATA chipset; as it's on the OpenSolaris HCL, it should hopefully be possible. Good luck!

    Then I would use CIFS sharing (NFS write performance was not as good as CIFS).

  35. Got it set up. Checked Gigabit at each end – all good. However didn’t use Cat 5e cable all the way (too short!). Going to try that tonight.

    Still, it’s slow. About 2MB/s. Actually slower than it was yesterday – I changed the drive setup again back to AHCI from IDE.

    Read tests using some dude’s script show about 60MB/s. Which isn’t too far off what other testers get (70-80MB/s). So it’s just write speed, to a ZFS pool shared over NFS. Actually the NFS doesn’t really matter, the write speed is just slow.

    As mentioned above, this guy has same hardware and same problem, but his fix didn’t work for me:

    https://www.opensolaris.org/jive/thread.jspa?threadID=57647

    Options:

    0. Try the gigabit cable, but I don't think this is the problem now.
    1. Try Linux, see if it's the drivers. Could try software RAID, or my mobo supports hardware RAID. Don't know if Linux would drive it though.
    2. Go back to the shop, change the mobo and maybe the processor if needed. A friend has an Asus board that worked just fine. Super fast write speeds.
    3. Keep fiddling. There must be a way to get the drives to work faster…. it's a supported mobo and SATA interface.

    I have to get it sorted. I can’t concentrate at work .. I’m just spending time reading forums!

  36. OK. So, using Linux (just LVM, not RAID5), I get 50MB/s via Samba. I could not get NFS to work. I could see the NFS share in Finder, but got error -43 when trying to access it. Can't cd via terminal either: permission denied.

    Couldn't get the Samba share to display in Finder either (using your method above). Could cd via terminal just fine.

    Going to try RAID5 on Linux next. But I think this means that it's the OpenSolaris ATA drivers that are causing the slowness.

    So, as much as I’d like to use ZFS, I think I’ve had enough! I’ve been at this for almost a week now!

    I just want a resilient file store! 🙂

  37. But then I re-read your previous posts in this series:

    “He says: β€œI’m using it [ZFS] because I’m fed up with losing data to weird RAID issues with Linux, and I believe that OpenSolaris with ZFS will be substantially more reliable long-term.”

    Ah, I really do want ZFS

  38. Yes, I think you really *do* want to use ZFS for all the super advantages it offers.

    However, as you saw from your Linux experiment, the hardware seems to be working fine. The issue seems likely to be a driver issue, or possibly a BIOS misconfiguration causing the driver to operate sub-optimally.

    Oh, and did you check that you're now using category 5e cable or better on all computer-to-switch ethernet connections?

    Did you find any useful info when you searched on Google for other ZFS users using your motherboard? They must have the answers that will help you either fix the current (driver) issue, or abandon the motherboard and choose a different one that is definitely known to work (like mine if you have to).

    If you decide to get my mobo, note that I am not getting very good power consumption figures (120W with 3 drives in the pool, plus an OS boot drive), due to the fact that I have an AMD processor of family 15 (see here for a full explanation: http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/#comment-2130)

    So it is my feeling that currently some Intel CPUs will give you lower power consumption figures than AMD processors, as OpenSolaris has better support for CPU frequency scaling on Intel processors than AMD ones, and when the processor is idle (95% of the time, probably) this is important. OpenSolaris supports CPU frequency scaling on AMD processors of family 16 and above, but that means the Barcelona processors, and these appear to use between 80W and 120W — ouch!

    For the sharing problem, for simplicity, ensure the user and group ids match on your Mac and Solaris boxes. If they don't currently, then try deleting the user from the Admin panel, and recreate the user with the commands below (replace 'username' and 'groupname' with your preferred names, of course):

    On the mac, in Terminal type: id your_user_name

    Record the ‘uid’ and ‘gid’ numbers (user id and group id).

    As root at Solaris command line, create group for ‘groupname’ and user for ‘username’ (uid and gid to match Mac’s ids):

    # groupadd -g gid groupname
    # useradd -g gid -u uid -s /bin/bash -d /export/home/username -c username -m username
    -g gid: adds user to group gid
    -u uid: creates the userid uid for this user
    -s /bin/bash: assigns the default shell to be bash for this user
    -d /export/home/username: defines the home directory
    -c username: creates the comments/notes to describe this user as required
    -m: creates the home directory for the user username

    And for your shared directory do: (1) ‘chmod 755 shared_directory’ to give your user on the mac read/write access, (2) ‘chown username shared_directory’ and (3) ‘chgrp groupname shared_directory’

    Good luck.

  39. Yup, cat 5e all the way now.

    Not really any useful info on Google for my mobo and solaris.

    Going to try to find an SATA card and try that.

    Oh, I just read that on some chipsets, when there are more than 4 SATA ports on the board, only the first four are usable properly. The others are hidden behind some ‘raidgoofyness’.

    Mine are plugged in to 1,2,3 … and for some reason 5.

    Could try 4!

  40. Slot 4 didn't work.

    SATA card wasn't recognised by my install of Solaris. I used snv_93; it's a bug fixed in snv_94. Tried the workaround, but that didn't work. Could try it again, but I decided I want my onboard SATA ports to work.

    I've found a board that works (my friend has it: the Asus M2N-VM HDMI). Means spending about £80 (need a new CPU too), rebuilding the parts and reinstalling.

    Tried hardware RAID5 and Fedora 9 Linux last night. Didn't work.

    So, my choices are:

    1. Spend, wait (can't get the mobo until tomorrow), rebuild (or let the shop do it, but that means lugging it down there).
    2. Go with software RAID5 on a Fedora 9 Linux install.
    3. No RAID, just go with ext3 in a big LVM on a Fedora 9 Linux install.

    I am a little concerned with the lack of resources for troubleshooting OpenSolaris. Fedora seems a lot slicker.

  41. Well, sorted, almost.

    I got the Asus M2N-VM HDMI. Rebuilt the machine and installed SXCE snv_93. No network. Apparently the nVidia chipset doesn't work with the Solaris drivers. I tried the nfo 3rd-party drivers but couldn't get those to work. So I popped out and bought an Intel PCI gigabit ethernet card. All good.

    BTW – my friend gave me the wrong mobo model name! He had the M2A-VM

    Created and shared a ZFS pool via NFS and CIFS. I get about 20MB/s over NFS and 30MB/s over CIFS. I'm not using my new 5e cable yet though (need the long cable so I can have the box in the living room plugged into the TV).

    So I decided to go with CIFS. I couldn't get the /shares mount to show up in Finder like you, so I mounted it in /Network/Servers via the fstab file. Lovely, worked fine.

    My user/group was set properly; however, any files copied via Finder to the share were written without any permissions at all on the Solaris box.

    I noted my ZFS pool had aclinherit=restricted whereas yours has 'secure'. I tried changing that and restarting the smb service, but the share just disappeared. The Mac log said it was dead.

    So .. still trying!

  42. Tried snv_95. Now getting 35-40MB/s. I didn't read all the comments; someone mentioned a problem with snv_93 and CIFS.

    Fast enough for me.

    Finally.

    Many thanks for all your help during my ordeal!

  43. Congratulations on getting it working, shaky! Those speeds sound a lot more like what I would expect to see.

    You certainly had some 'fun' getting it all to work 🙂

    And thanks for reminding me — I was going to write up some stuff on ACLs and still haven't got round to it.
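
    In the meantime, the aclinherit setting mentioned above can be inspected and changed per file system; a minimal sketch (the dataset name is an example, and 'passthrough' is just one possible mode):

```shell
# Show the current ACL inheritance mode for the file system
zfs get aclinherit tank/media

# 'passthrough' makes new files and directories inherit the
# parent directory's inheritable ACL entries unmodified
zfs set aclinherit=passthrough tank/media
```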

    Hope your system works nicely now.

  44. Where can I find good info on choosing a CPU for OpenSolaris?

    I want a dual core, low wattage, 64 bit CPU.

    Should I go Intel or AMD?

    Forget price/watt/performance ratios.

    Which CPU would perform best in OpenSolaris?

  45. Hi,

    I am getting ready to put together a computer (well, all of the parts that I've purchased over a few months) that will be my ZFS file server/media server. What are the main differences between SXCE and OpenSolaris 2008.05? Is one better/easier to use than the other?

    Are there any good books out yet pertaining to ZFS or OpenSolaris?

    Thanks,

    Brian

  46. Hi Brian,

    I’ve not used OpenSolaris 2008.05 yet as I’m still running on SXCE (Nevada build 87, which is old now), so I can’t give you a definitive answer. However, SXCE is unsupported, not that that probably bothers you if it’s for a home fileserver/NAS. I do intend to try out 2008.11 though at some point in the future.

    OpenSolaris 2008.05 has paid-for support available if that’s required, and it also has IPS (Image Packaging System) included, which is like the APT (advanced packaging tool) from Debian Linux. However, if I recall, IPS only allows you to update packages (apart from security fixes) if you buy an annual subscription which costs around $324 (see http://www.sun.com/service/opensolaris/index.jsp). For me, this was unacceptable for a home fileserver, although perhaps businesses consider that acceptable. See more on this here:
    http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/#comment-1523
    http://www.sun.com/service/opensolaris/faq.xml

    See here for more info, where Sun describe the differences between their offerings:
    http://www.opensolaris.org/os/downloads/

    I'm not aware of any currently available ZFS books, but setting up a box to serve files from a ZFS file system is quite simple; you can take a look at the ZFS Administration Guide: http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

    I’ve described the basic steps you need to perform in the post above, but the ZFS Administration Guide goes into a lot more detail, so it is very useful.
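
    For reference, the basic flow described in the post is only a handful of commands; a minimal sketch (pool, file system and device names are examples):

```shell
# Create a storage pool with a single RAIDZ1 vdev from three whole disks
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

# Create a file system in the pool and share it over CIFS
zfs create tank/media
zfs set sharesmb=on tank/media

# Check pool health
zpool status tank
```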

    Good luck with your build!

    Simon

  47. hey Simon,

    Thanks for your help. I have another technical question, as the computer is now built (hardware-wise). I have an
    -Asus M3N78-PRO motherboard
    -AMD 2.6GHz X2 5000+ CPU
    -4 GB G.Skill DDR2 RAM
    -4 x 1500 GB Seagate drives
    -1 x 250GB Seagate drive (boot disk)
    -650W PSU
    -case
    -DVD-RW

    My question is this: the motherboard has 6 SATA ports, and three modes it can be in: AHCI, RAID, or IDE. The only way that ports 5 and 6 can be seen is if it is in AHCI or RAID mode (I need them for all the discs). I do not want to RAID the disks, as I will be using ZFS, and I am not sure what AHCI is. But SXCE (b99) (or OpenSolaris) will not boot in either mode; it hangs after I choose the SXCE install option and it says 'SunOS release 5.11 Version….'. Do you have any ideas?
    I can get OpenSolaris to install in AHCI mode, but then, randomly, I will get an error saying "ahci_port_reset: port (x) BSY/DRQ still set after device reset port_task_file = 0x180".
    port (x) is whatever port is getting the AHCI error. The drives are fine; I ran the SeaTools long test on them all. Is this a driver issue?

    thanks.

  48. Simon,

    sorry to post again. I got my problem figured out (I think). I installed SXCE b99, but am only able to boot into the 'Solaris Express xVM' option at bootup. The standard option just makes the machine reboot, and failsafe does not work. Is xVM fine to use regularly? I set it as the default boot option for now so I don't have to choose it every time. I was also able to recognise the other disks and create a zpool. My next problem is this:
    how do I share over NFS to my Mac Leopard machine? I set sharenfs=on for the share, but when I try to mount it in OS X it tells me incorrect username or password. I have created a new user on the SXCE box for brian, made the UID the same as on my Mac, and changed the folder permissions so it is owned by my user. I'm not sure what else I can try? I also set SMB on, and the Mac dialog that asks for username and password keeps telling me it is incorrect (and I've tried a few different users that I've made on the server).

    thanks.

  49. Hi Brian, glad you managed to sort the problem out. I’m not sure about the ‘solaris express xVM’ option you mention as I’ve not used it. However, it does sound strange that the standard option makes the machine reboot! I assume you checked the Sun hardware compatibility list (HCL) before choosing the hardware, or found other people reporting success with Solaris and ZFS?

    Regarding the sharing, I used CIFS and to get the sharing working, check the following sections above:
    – Making the storage pool accessible from other machines
    – Update 30/03/2008: extra steps required for CIFS setup
    – Configuring client machine access to the file system share

    If it still fails then see if you can ‘tail -f /var/adm/messages’ and then try to connect to the share from the Mac again and see if any failure messages are displayed on the console. That’s how I found out why my earlier attempts to get it working were failing. I saw this on the console:

    When I try to access the share from the Mac using autofs (smbfs:), I see the following message in /var/adm/messages:

    Mar 27 20:38:48 solarisbox smbd[667]: [ID 653746 daemon.notice]
     SmbLogon[WORKGROUP\simon]: NO_SUCH_USER
    

    Also, from memory, I use AHCI, and this should enable hot swap of a failed drive, in theory.

    Good luck, and let me know if you (1) get the connect working and (2) if you solve the boot issue.

    Cheers,
    Simon

  50. Simon,

    I am currently using the xVM option (I was able to make it the default and it seems to run normally in every respect).
    I was able to get NFS working on the Macs; now I am just trying to figure out how to set permissions on the Solaris box to get a folder up for everyone in my house that is read/write. I am able to set the group ownership but I'm not sure how to get the owner to be everyone. Is there a way to do this?

    I did check the HCL, and although it does not list my exact motherboard, it does list the M3N78 HDMI and others with the M3N78 model, so I'll just wait to see if an update will fix it. For now I just use the xVM.

    Thanks so much for your help, it really has been awesome. I have never used Solaris, and between this blog and the OpenSolaris help forums I've got this thing up and running in under a week, and hopefully soon I can get it fully implemented on my network to serve up files and (hopefully) get Time Machine to work with it. Unfortunately I had to send my MacBook Pro in for repair today, so I am using the server as my desktop until it comes back next week.

    Brian

  51. Simon,

    Thanks for the great help. I was able to get NFS working; I might work with CIFS later on, but for now I'll stick to NFS. I have also been able to get a shared folder up for everyone to read/write to, which is nice. I'm slowly transferring (over rsync) close to 700 GB of data from my iMac and external FW drive to the zpool to give the drives and the machine a workout. So far it has been on for about a day now and the fans on the case are still pushing out cold air! The hard drives are cool to the touch, and I've had it transferring this data all morning and into the afternoon. I even tried it as a scratch disk for Final Cut Express and it worked flawlessly editing HD footage from my Canon HF10. What a machine!

    I have not had any time to diagnose the boot problem, but I am able to do everything booting to xVM, so I will use it for now. Next step, after I get all the data over and attempt Time Machine backups, I'll work on getting an Ubuntu virtual machine up for some media streaming. Do you know of any good media streaming programs I could use under SXCE to stream to a PS3 and Xbox 360? This is one of the goals of this machine, but I'm sure it will be further down the road.

    Thanks for the help again and the great articles. Between here and the Solaris forums I've come from having never used Solaris before to feeling quite comfortable in it in about a week's time. So far I am loving it.

    brian

  52. Hi Brian,

    Thanks for the compliments, and I’m glad to hear you managed to get NFS working, and your setup is working nicely. What sustained write-speed are you seeing with NFS?

    Also, good to see it works nicely for editing HD video. I intend to get a new MiniDV tape-based HD camera soon — possibly the Sony HDR-HC9, but I need to do a bit more research first… 3 CCD or 1 CCD etc.?…

    A standard XBOX (not 360) running XBMC appears to be quite a neat solution for viewing video streamed from your ZFS fileserver. It is known to play audio and video compressed with almost any video/audio codecs: MPEG2 (DVD), DivX, Xvid, MP3 etc. However, the standard XBOX's processor doesn't seem powerful enough to decode MPEG4 fluidly in real time. Perhaps the XBOX 360 might be better — assuming that XBMC runs on it.

    For the PS3 you might like to look at these posts:
    http://blogs.sun.com/constantin/entry/mediatomb_on_solaris
    http://blogs.sun.com/constantin/entry/twonkymedia_on_solaris

    Ironically, I now have a boot problem with SXCE b87, which I think I caused by turning the machine off mid-boot as there seems to be some bug in the Asus M2N-SLI Deluxe motherboard’s BIOS relating to the NIC initialization process, causing the NICs not to work sometimes when booting…

    Good luck,
    Simon

  53. I wonder, would you mind writing a new article for your excellent ZFS series? (BTW, I have posted links to your site on several Linux/Solaris forums.) In the article I would like you to wrap up your experiences, now that you have used ZFS and Solaris for a while, as someone coming from Linux. I think many would like to read a thorough article on that, just like all your other well-researched articles. Are you a convert? If yes, why? If no, why not? Etc. For the coming flame wars between Solaris and Linux, I can always link to your site as someone who has tried both. ;o)

    I don’t mind if you delete this post. I just wanted to ask you this question. You can delete this post and just write such an article. Pliiiz?

  54. Simon,

    I’ll check out those posts as I have used MediaTomb with Linux before. I did get XBMC up on an old XBOX a while back; I’ll just have to pull it out of the closet to see if it still runs, but it’s nice knowing I can stream to it.
    Next on the list is pulling Ethernet all over the house; better do it now while it’s getting cool (relatively) down here in sunny Florida!

    With NFS, while transferring any one large file I’ll get around 30 mbps, with peaks above 50. However, when transferring a bunch of small files (I’m using rsync; the Finder is too slow), it barely stays above 2-3 mbps. This could get annoying because I’m moving over 750 GB of family movies and pictures from summer trips over the years, plus some backups of documents from way back, and I have a feeling it will take all night and into tomorrow (hopefully not longer; for some reason this is eating the RAM on my Mac, rendering it useless when transferring files).

    The problem is not apparent using Final Cut Express: when I was importing the footage it would be at about 5 mbps, which is due to the video having to be processed before it gets stored (using AVCHD, it has to get transcoded to AIC). When editing, I can play back the timeline or any other clip in real time without any lag at all. The video was shot on a Canon HF-10, which records at its highest setting at 17 mbps. This is pretty much what I see when I’m playing a clip back over the network, so this does not seem to be a problem. I’d like to see what kind of speed I get when (and if) I get Time Machine set up to back up over the network.

    Is there a way to update b99 to the most recent build, b101? I downloaded the DVD ISO, and I was reading about the Live Upgrade feature, but I also remember that when I was re-installing over a current installation last week, I had the option to upgrade it. Is this another viable option? Can I just boot from the DVD, choose upgrade, and I’ll be on b101? I’m trying to see if I can get this reboot bug worked out, as I am still booting into the xVM option.

    Sorry to hear about the boot problem; wish I had some kind of advice to give in return, but I’m still getting used to this new system! Good luck working with that. Are you using the built-in NICs on the ASUS MB? Mine is not recognized by Solaris, and I’m having to use a gigabit NIC pulled from another box. Is there a driver I can install?

    Thanks again for the advice,
    Brian

  55. Hi Kebabbert,

    Thanks again, and I think that would make an interesting post, like you say.

    I had been thinking of writing a post like this for a while, but due to my pedestrian immersion into the world of Solaris and ZFS etc, I have still not really grappled successfully with all of the areas of interest relating to this fileserver/NAS. So I still have a few areas to explore or return to, and I will write some posts on these subjects soon. After that, I will write the kind of summarizing post that you mention.

    Thanks for posting links to these ZFS articles — I had noticed sudden huge increases in web traffic, mostly from Sweden and other Scandinavian countries 🙂

    One thing though — I’m not really a die-hard Linux user, but I ran it at home for a few years. I did like using Linux a lot after I stopped using Windows at home around 2000. I used a Linux LAMP configuration to run this website prior to moving it over to a Solaris SAMP configuration. Around 2004 I bought a PowerMac and then Mac OS X became my preferred OS for home use. I found its apps and especially the multimedia apps had great usability and made using them a pleasure. Of course, with the Mac’s command line I could continue with using UNIX commands. If of interest, I wrote a brief comparison of Windows, Linux and Mac OS X here (bear in mind that it was written a couple of years ago and Linux is always improving, so…):
    http://breden.org.uk/2006/11/12/windows-linux-and-mac-os-x-shootout/

    Personally, as I am involved in photography (macro, event, family etc) I find the Aperture 2.0 software running on Mac OS X is a superb piece of software that helps me organise my big photo library, which is growing rapidly due to shooting in RAW format.

    Then there’s GarageBand (low end), Logic Express (midrange) and Logic Studio (high end) for music production.

    And for managing video productions you have great software from iMovie/iDVD (low end), Final Cut Express (midrange) and Final Cut Studio (high end).

    And for software development, a Mac is great too.

    So for me, for most of my needs, Mac OS X is my OS of choice on the client side, and Solaris for the server side: 1. ZFS fileserver/NAS, and 2. web server.

    Cheers,
    Simon

  56. Hi Brian,

    Greetings to you in Florida! I was living there for 6 months in 1988 at Delray Beach. Had a great time in Florida, although speed cycling on Saturday mornings northwards along the A1A up to West Palm Beach and back in July & August was more than hot enough for me and my buddy 🙂

    Yes, I think that the XBOX running XBMC is a superb combination, and should work nicely.

    I found speed variations too with large/small files and using rsync via the command line versus the Finder.

    Yeah, 17MBytes/sec is around 60GBytes per hour of HD video. Currently I only have an SD camera but even this produces imports of around 13GBytes per hour. This is why having a massive storage area that ZFS facilitates is so important. So far though, I have only edited video from data held locally on the Mac’s hard drive, using the ZFS server as archive storage after editing is complete, and to store the full video import files from the camera. But I might give editing from ZFS storage across the network a try.

    I’m not quite sure of the current situation regarding using a ZFS filesystem as a target for Time Machine backups. Last time I looked, Apple had nobbled it so it wouldn’t work, in the migration from 10.5.1 to 10.5.2. I think they wanted to push their Time Capsule hardware.

    I think there should be an upgrade option when you run the SXCE snv_b101 installer. Hopefully it should work smoothly, although when I last tried it around b80 or so, it failed, and since then I do a full install.

    I do use the built-in NICs on the M2N-SLI Deluxe motherboard. I have snv_b101 installed now and I’m just sorting out the networking (trunking), which can be a real PITA to get working correctly 🙂

    BTW, if you’d like to chat about video editing or ZFS etc, send me a comment with just your email and I’ll write back to you (and delete the post without publishing it). I’m in need of some info on HD cameras (1 CCD or 3 CCD), and also which Mac app to use, as I think I might do better investing in something like Final Cut Express instead of using iMovie/iDVD.

    Cheers,
    Simon

  57. Simon,
    I’ve upgraded to the latest version of OpenSolaris, 2008.11, and I’d like to keep my old ZFS pool. I installed the OS on a separate disk from the ones the zpool is on, so I know the pool is still there. How do I mount the old zpool, and then get it to mount every time I boot up the machine?
    thanks.

  58. Hi Jake,

    From memory, I think you issue the following command:

    zpool import zpool_name

    If it complains then I think you can force it by specifying the -f switch:
    zpool import -f zpool_name
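    Again from memory, so double-check the syntax, but if you’re unsure of the pool’s name, or want to verify things afterwards, something like this should help (‘zpool_name’ is whatever you called your pool). Once imported, the pool should be remembered and remounted automatically on every boot, which covers your second question:

    ```shell
    # With no arguments, zpool import scans the attached disks and lists
    # any pools available for import, without importing anything yet:
    zpool import

    # After importing, verify the pool is healthy and its file systems
    # are mounted; an imported pool remounts automatically at boot:
    zpool status zpool_name
    zfs list -r zpool_name
    ```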

    Cheers,
    Simon

  59. Awesome! It worked perfectly.
    Have you tried the new 2008.11? I’m enjoying the slider to control snapshots. I like using the command line, but the GUI that Time Slider adds to snapshots is neat and easy for anyone else in my home to use if they need to.

    Thanks!

  60. Good news!
    No, I haven’t tried 2008.11 yet, but I will do very soon, and I also intend to try out the Time Slider GUI and the underlying service that manages the snapshotting according to the schedule set in Time Slider. When they also build in the functionality to send the differences between snapshots to another machine for incremental backups, it will be really cool; I know they have stated this as their goal, so we should have that great functionality soon 🙂

    I already have this sending of diffs between snapshots working from the command line but, like you say, when they have it built into a nice easy-to-use GUI so that anybody can use it, then it will be really accessible to more people.
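    For anyone curious, the command-line version of sending snapshot diffs looks roughly like this (the pool, file system and host names here are made up; adapt them to your own setup):

    ```shell
    # Take an initial snapshot and send it in full to the backup machine:
    zfs snapshot tank/data@snap1
    zfs send tank/data@snap1 | ssh backuphost zfs receive backup/data

    # Later, take a second snapshot and send only the differences (-i)
    # between the two snapshots, which is much smaller and faster:
    zfs snapshot tank/data@snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive backup/data
    ```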

  61. Simon,
    I’m getting an error on my zpool. It does not seem fatal, but I cannot pin down exactly what it means. Have you had this before? I set it to scrub. I am currently creating an image of my imac (450GB) to the server, because I need to do an archive and install on the imac. Could it just be that there was an interruption in the data flow? Is that all the error means and I just need to do a zpool clear?
    Thanks.

    pool: chuck
    state: DEGRADED
    status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
    action: Replace the faulted device, or use ‘zpool clear’ to mark the device
    repaired.
    see: http://www.sun.com/msg/ZFS-8000-K4
    scrub: scrub in progress for 0h2m, 0.43% done, 8h41m to go
    config:

    NAME STATE READ WRITE CKSUM
    chuck DEGRADED 0 0 0
    raidz1 DEGRADED 0 0 0
    c3t1d0 ONLINE 0 0 0
    c3t2d0 ONLINE 0 0 0
    c3t3d0 ONLINE 0 0 0
    c3t4d0 FAULTED 5 12.0M 0 too many errors

    errors: No known data errors

  62. Hi Jake,

    I just took a quick look here: http://www.sun.com/msg/ZFS-8000-K4

    It might be worth checking that the SATA cables are securely plugged in to the drives and the motherboard too. Also check the power connectors to the drive are properly secured. So it sounds like it could be caused by a loose SATA or power connector to the c3t4d0 drive perhaps.

    I recommend you spend some time to read through the info at the URL above, and check through all the details to see if it helps.

    Good luck. I would be interested to hear how you resolve this.

    Cheers,
    Simon

  63. Simon,
    Unplugging and replugging the SATA and power cables did the trick to get rid of the read/write errors. Now I’m having a checksum error on the same disk. I had Time Slider turned on, but I’d like to delete the snapshot taken of a 450GB .DMG file, because I believe this file is corrupted (per the read/write error earlier). I’m now getting this:

    pool: chuck
    state: ONLINE
    status: One or more devices has experienced an unrecoverable error. An
    attempt was made to correct the error. Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
    using ‘zpool clear’ or replace the device with ‘zpool replace’.
    see: http://www.sun.com/msg/ZFS-8000-9P
    scrub: none requested
    config:

    NAME STATE READ WRITE CKSUM
    chuck ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c3t1d0 ONLINE 0 0 0
    c3t2d0 ONLINE 0 0 0
    c3t3d0 ONLINE 0 0 0
    c3t4d0 ONLINE 0 0 31

    errors: No known data errors

    Can you tell me how to locate and destroy the snapshot taken? (I have deleted the actual file already, but the snapshot remains, taking up 450GB of space)

    Thanks for the help.

  64. Hi Jake,

    To list the snapshots, try something like:

    # zfs list -t snapshot

    Then once you’ve identified the snapshot you wish to zap, try something like:

    # zfs destroy mypool/myfs@1

    where ‘mypool’ is the name of the pool (‘chuck’ in your case), ‘myfs’ is the name of the file system, and ‘1’ is the name of snapshot — substitute as appropriate.

    Later you can do another scrub of the pool and see what status you get:

    # zpool scrub mypool
    # zpool status -v mypool

    Check your last post for the info on ‘zpool clear’ as well: e.g. ‘zpool clear chuck c3t4d0’, to clear errors in your ‘chuck’ pool for drive ‘c3t4d0’.

    These commands are off the top of my head as I’m in Christmas mode right now, so apologies for any inaccuracies πŸ™‚

    Good luck!

    Merry Christmas,
    Simon

  65. Simon,

    I’ve built this monster now. I’ve got the RAIDZ pool up and running and I can access the data remotely. This would seem like a success but I have many problems that I was wondering if you could assist with some?

    1. I can’t access the Java Web Administration console from any machine except localhost (even though my firewall is disabled).

    2. Using the Samba shares, the user privileges are obeyed but group privileges are not. Is this a Samba limitation? I tried NFS too but couldn’t figure out how to set custom share names. Why do you use Samba as opposed to NFS?

    3. Any data that I create locally on the server in these file systems has incompatible permissions and cannot be accessed.

    The other problems are issues with things that you have not attempted to cover here, so I won’t bring them up.

    Thanks,
    Kev

  66. Hi Kevin,

    Congratulations on building your beast! 🙂

    In answer to each of your questions:

    1. I’ve always used the command line for issuing ZFS commands, as they are one-liners, so I must admit I’ve never used the Web Administration Console you refer to. But if it’s a URL access/web server issue, could it be a port or HTTP server configuration issue?

    2. and 3. I use CIFS sharing, as NFS gave terribly low write speeds. In order to achieve the access privileges you seek, look at the chapter entitled ‘Using ACLs and Attributes to Protect ZFS Files’ within the ZFS Administration Guide.

    For setting custom share names on the ZFS fileserver end, try this:

    # zfs set sharesmb=name=myshare sandbox/fs2
    

    where ‘myshare’ is the name of your share, and ‘sandbox/fs2’ is the poolname/filesystem-name
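    To check that the share name was actually picked up, you could then do something like:

    ```shell
    # Show the sharesmb property for the file system:
    zfs get sharesmb sandbox/fs2

    # List all active shares, with their names and paths:
    sharemgr show -vp
    ```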

    Good luck!

    BTW, what hardware did you finally buy, and what are your power meter readings when the system is idle? (if you have a power meter, that is)

    Cheers,
    Simon

    Recently I found that an error -50 appears when using QuickTime on Windows XP to watch a video file on the ZFS CIFS share.
    Some people suggest adding
    mangled names = no
    to smb.conf;
    however, that requires using Samba instead of the Solaris CIFS service.
    Did anyone try any alternative?
    Thanks for any advice.

  68. Hi Fai,

    Personally, I never saw an error -50 using CIFS sharing, but I did a quick search and a couple of matches were from almost a year ago when CIFS sharing was still quite new in Solaris, so could you mention which version of Solaris/OpenSolaris you are using?

    Cheers,
    Simon

    I am using Solaris Express build 105.
    Actually, CIFS works well except for opening video files using QuickTime from Windows (using a Mac is alright).
    QuickTime is the latest version on all machines.
    And I am able to watch video files using Windows Media Player and listen to MP3 files as well.

    I’ve done some research and have no idea either.
    Anyway, thanks for your help
    🙂

  70. Hi Simon,
    Thanks for your suggestion. However, the error occurs only when the file is located on a Solaris CIFS share; it works when the file is located on a Windows share.
    Really, thanks for your advice.

    Cheers,
    Fai

  71. For some reason, I was not able to create the zpool unless I typed ‘raidz1’ in lowercase as opposed to caps in your instructions:

    zpool create tank RAIDZ1 c1t0d0 c1t1d0 c2t0d0

    I’ve installed OpenSolaris 2008.11 onto a 160GB drive, using the full capacity of the drive. I have three 250GB drives. When I run the ‘format’ command, those three 250GB drives say “unknown drive type”. They were pulled from working systems (two had Windows on them). I deleted the partitions and formatted them to Solaris2 (and they are not active partitions). Now (and even before I did this), after trying to do #zpool create tank RAIDZ1 .. .. .. I am getting a message that says this: cannot open ‘RAIDZ1’: no such device in /dev/dsk; must be a full path or shorthand device name. Oh, I should note that I logged in as the user I created, but then did a “su -” and entered the root password. I am totally new to Solaris/Unix, so maybe I did something wrong. Any help is appreciated greatly!

  73. @Simon,

    I’m getting an error when trying to change the group using chgrp. It is saying:

    chgrp: invalid group ‘kristian’

    I was able to successfully change the owner using the ‘chown’ command. Any ideas? “kristian” is the name of my user account. I opened a terminal window, typed “su -“, entered my root password, and followed your steps above but used “kristian” since it was my user account name. Thanks in advance!

  74. Hi Kristian,

    Sounds like the group has not been created. Try logging in as kristian and then typing ‘id’ — see below:

    -bash-3.2$ id
    uid=501(kristian) gid=501(kristian)
    

    You should see something like the stuff above. The values are not important, but the ‘gid’ entry must be present.
    If you have not yet created the group called kristian, then to create the group and make the group the primary group of user kristian, type:

    # groupadd kristian
    # usermod -g kristian kristian
    

    To change the user & group for a file at the same time, try:

    -bash-3.2$ chown kristian:kristian filename
    

    or to change the user & group of all files and directories recursively, try this:

    -bash-3.2$ chown -R kristian:kristian *
    

    Hope it helps.

    Hi Simon,

    Thanks for the reply. Here is the output when I try your suggestions:

    kristian@solarnas:~$ groupadd kristian
    UX: groupadd: ERROR: Cannot update system files – group cannot be created.

    I am logged in as kristian.

    Hi … I’m back 🙂

    Have you upgraded to Snow Leopard? Any issues?

    I’ve upgraded and can no longer write to my SMB share. I get an error -36.

    Can delete files. Cannot write.

    Can copy via Terminal, but then cannot open the file I copied via Finder (it’s greyed out). If I add that file to iTunes, I can edit metadata and can also play in iTunes. Just seems like Finder isn’t working right.

    Still investigating….

  77. Hi Shaky, that sounds bad. Let me know if you find the solution. I didn’t upgrade to Snow Leopard yet, but might do sometime soon-ish. What’s your impression of it so far? Regarding the CIFS issue, it sounds like it could perhaps be an ACL issue. You could try looking at the ACL info for a troublesome file (note upper and lower case ‘v’):

    -bash-3.2$ ls -V testfile1
    -rwxrwx---+  1 fred    fred          0 May 10 21:30 testfile1
                     owner@:rwxpdDaARWcCos:------I:allow
                     group@:rwxpdDaARWcCos:------I:allow
                  everyone@:rwxpdDaARWcCos:------I:deny
    -bash-3.2$
    
    or:
    
    -bash-3.2$ ls -v testfile1
    -rwxrwx---+  1 fred    fred          0 May 10 21:30 testfile1
         0:owner@:read_data/write_data/append_data/read_xattr/write_xattr/execute
             /delete_child/read_attributes/write_attributes/delete/read_acl
             /write_acl/write_owner/synchronize:inherited:allow
         1:group@:read_data/write_data/append_data/read_xattr/write_xattr/execute
             /delete_child/read_attributes/write_attributes/delete/read_acl
             /write_acl/write_owner/synchronize:inherited:allow
         2:everyone@:read_data/write_data/append_data/read_xattr/write_xattr
             /execute/delete_child/read_attributes/write_attributes/delete
             /read_acl/write_acl/write_owner/synchronize:inherited:deny
    -bash-3.2$
    

    More ACL info here:
    http://breden.org.uk/2009/05/10/home-fileserver-zfs-file-systems/

    Cheers,
    Simon

  78. First impressions are good. I like the ‘intra-app’ expose. Seems fast. Mail is super fast.

    I didn’t have too much time to check it all out last night as I first tested the link to my ZFS server, which didn’t work, so I spent the night troubleshooting that!

    Anyway. ACLs look good. Mine just grant everything to everyone!

    To recap:

    I cannot copy from Mac SL to the ZFS pool shared via smb. Get the -36 error.
    I can delete files via Finder
    I can create new folders via Finder
    I can copy via Terminal
    Perms etc look good on the file copied via Terminal from Mac and doing an ls -V on the Solaris box. I can read and write according to perms.
    That file, when viewed in Finder cannot be opened, but can be played in the “Get Info” preview box
    I can drag the greyed out file into iTunes
    I can change metadata on that file via iTunes (i.e. write to the file)
    I can play it through iTunes and AppleTV
    I can write a file via TextEdit, for example, and it all looks good.
    I can mount, read, copy, play from my MacBook on 10.5.8.
    The time on my Solaris box had slipped to more than 5 mins from the time on my Mac, so I updated that. Can cause issues apparently.
    I’ve turned the firewall off. No change.
    Only error I can see in the Mac logs that might be related:

    smb_maperr32: no direct map for 32 bit server error (0xc00000e5)

    Not sure what that is. Google pointed me at the time issue, which I’ve fixed, but still get the error.

    I found this:

    http://spiralbound.net/2005/09/22/macintosh-finder-copy-to-samba-share-problem

    Which seems to describe exactly what I am seeing.

    Says that he set posix locking = no on his smb share and it was all good.

    But I’m not sure what this means:

    “Samba share is mounted over NFS on the server”

    I’ve never really understood this output:

    # sharemgr show -vp
    default nfs=()
    zfs nfs=()
    zfs/sandbox/fs1 smb=()
    sandbox_fs1=/sandbox/fs1

    They have just set sharesmb … but why is nfs=() on? Is that what they mean by a samba share over NFS?

    Anyway…

    For that fix, you need to write an smb.conf file, i.e. not use sharesmb=yes via zfs commands. Not sure how they interact, i.e. if you need to disable it in zfs first?

    I tried creating an smb.conf file and restarting the services. No idea how to tell if they read the new conf, so I rebooted. Now the Mac won’t connect at all. But I’ve come to the conclusion that if you change anything on the server side, it’s best to reboot the Mac.

    By this time it was a little bit late!

    To try:

    zfs set sharesmb=off and test my smb.conf setup with public = yes first (read that this fixed one guy’s issue)
    If that doesn’t work, try with locking off
    Try using samba instead of smb. They are not the same apparently.
    Finally, try NFS again, but couldn’t get that to work last night. It worked a year ago though.

  79. Looks like Snow Leopard has changed some security aspects, esp. in Finder, because I assume you changed nothing on your Solaris ZFS NAS, just migrated from Leopard to Snow Leopard?

    Once you debug the CIFS access problem it would be interesting to hear from you. In the meantime, thanks a lot for the info — I’m sticking with Leopard for now 😉

    If Apple had moved to ZFS root boot, you could’ve just done a rollback to Leopard from Snow Leopard… with a one liner, well… assuming that Leopard itself had also used ZFS… come on Apple, time to move on up!

  80. Hi Simon,
    Greetings from Hong Kong again. Recently I’ve upgraded my MacBook Pro to 10.6 (more specifically 10.6.1). But I found that I am not able to write files to my ZFS file server via SMB anymore. I can mount it and read it, but when I try to write files to the share via Finder, it says the disk cannot be written to. But actually I can create folders and save files there via other programs (e.g. TextEdit).

    So I have tried to do it in Terminal, and an error comes back: “Could not copy extended attributes to /Volumes/share/filename: Input/output error”.
    The file actually gets written to the share, but it seems the resource file cannot be created. I did some Google searching but no luck. You are one of the few active bloggers using Mac + ZFS, so did you face this issue as well? Thanks for your feedback and advice.

    Cheers,
    Fai

    Hey Simon, guys.
    Long-time listener, first-time caller 😉
    I’m having the same problems with OpenSolaris shares (OS 2008.11): 10.5.8 works, Snow Leopard (10.6.1) fails.
    Mounting a share from the OS server works OK; I can browse and read files, but when I try to copy a file to the share I get the “error -36” from the Finder, as well as “smb_maperr32: no direct map for 32 bit server error (0xc00000e5)” in the console. And it leaves a greyed-out “foobar.txt” file, 512 bytes in size.
    Copying via Terminal, on the other hand, works; no problems there.
    Things like setting the “DSDontWriteNetworkStores” default or mounting with “cifs://” instead of “smb://” make no difference.
    I also tried switching to NFS, but no joy either: same symptoms as with CIFS sharing. The mounted share can be browsed and read (even opening and editing an existing file works), but copying a file to the share via Finder fails with the “-36”. Again, copying via Terminal works.
    Shaky, you said NFS works for you? Could you share your settings?

    cheers,
    jay

  82. Hi Jay,

    I’m still using Leopard, so I haven’t yet experienced this problem, luckily.

    To me, it sounds like they changed the way the permissions are working through the Finder application.

    If you have time, give this a try, as this could be an ACL issue. Create a test file system such as tank/test, share it using the usual methods and then create a file in it from the NAS, and then try to view the file from the Mac’s Finder application, then try to open the file via the Finder and edit the file, then try to save it. What happens? For setting a fully permissive ACL on the test file system, try the following, and let me know what you discover:

    # zfs set aclinherit=passthrough tank/test
    # zfs set aclmode=passthrough tank/test
    # chmod A=\
    owner@:rwxpdDaARWcCos:fd-----:allow,\
    group@:rwxpdDaARWcCos:fd-----:allow,\
    everyone@:rwxpdDaARWcCos:fd-----:allow \
    /tank/test
    
  83. Well guys, it seems there’s a long thread on this problem of accessing SMB/CIFS shared files via Snow Leopard’s Finder application here: Topic : SMB -36 ioErr when opening files.

    In that forum thread, some people mention trying to connect to SMB shares on non-ZFS systems, and some also mention use of the Samba software, just to make things more complicated. I say this because, from a ZFS perspective, ZFS does not use Samba, but the Solaris CIFS service instead, which is much better: its implementation of the SMB protocol is exactly the same as Microsoft uses, so it emulates a Windows box much more accurately than the Samba software does. At least this is my understanding based on the CIFS reading I have done, and it makes use of CIFS preferable to using Samba. Unfortunately, it seems many people use the terms Samba, SMB and CIFS interchangeably, and this can also cause confusion.

    Another thing that becomes apparent is that in Snow Leopard, Apple have modified the way a file’s size is calculated. Now it seems they use the size on disk rather than the actual bytes consumed within the file itself, if the reports on that forum post are accurate… Lots of separate problems seem to be present, and I guess it will be a while before Apple is able/willing to devote resources to fixing all of these different issues. Glad I stuck with Leopard; for anyone able to do so, it seems rolling back might be a sensible option. Unfortunately, though, most people probably just upgraded their existing Leopard boot disk, so will be unable to “roll back”.

  84. Hey guys,
    it’s been a while for me, but thanks Simon for the ACL hint. Turns out, I forgot to set aclinherit and aclmode on the test pool I was using :).
    Anyway, I came across this in my search for a solution to the Snow Leopard/CIFS/Finder -36 problem:
    http://blogs.confusticate.com/jeremy/archives/2009/09/27/snow-leopard-and-opensolaris-nas-problem-solved/

    It seems OS 2008.11 and 2009.06 differ somehow in the CIFS sharing, but I have now upgraded my OpenSolaris box to 2009.06 and I am happy again 😉
    The Snow Leopard Mini can once again read and write normally to the shares, everything looks normal again.

    Cheers,
    Jay

  85. Thanks for the heads up Jay! So for any others who had this problem — if you’re using Snow Leopard, also upgrade your NAS to use OpenSolaris 2009.06 and everything should work fine. Good to know I can upgrade to Snow Leopard one day and not see this problem.

  86. Jay – for NFS, I didn’t do anything other than use Disk Utility to mount the server

    (in 10.6 you mount NFS via Disk Utility)

  87. I finally stumbled onto your blog, having search-educated myself into deciding I needed zfs, and that solaris/opensolaris was the way to get it.

    I’m a user-level *nix user from a ways back, and I’m willing to do the brain work needed to get a server going, although I have no interest in using solaris or zfs as a hobby. I’m trying to build a tool, and willing to do the work to get it. I have worked in the computer industry for a couple of decades, just not in this particular sub-part. As an uninformed person in this particular niche, knowing only what I’ve read on line, could I get a quick glance over my shoulder for “Will this work??”

    I’m decided on a system like this:
    – 6 1-TB SATA disks for the main pool
    – 2 mirrored disks to boot the OS
    – Xeon E3110 processor
    – Supermicro X7SBL-LN1 motherboard
    – add memory to budget

    What I’m looking for is some intelligent, experienced advice on the following questions, largely because I’m a wimp about tossing out $1K without first checking to see if I have some chance of success. These are all in the “I think it works, can you help me verify?” category.
    1. Is the mirrored boot disks separated from the main pool workable?
    2. Are six SATAs enough for raidz2?
    3. Will the onboard XGI Z95 graphics serve to get the install/etc. done?
    4. Does the Intel chipset (Intel 3200 + ICH9R + Intel 82573L) work with supported Solaris drivers?

    I’ve been hacking on this for about a month, and I… think… this is a reasonable setup, but I’d sure like some expert advice about any hidden gotchas before I start counting out the cash for it.

    I’d be eternally grateful, and owe any helpers much beer… 😎

  88. Hi R.G.,

    I don’t regard myself as a UNIX/ZFS expert at all, but I’m a self-taught user of Solaris & ZFS, and have found that it does what I need very well, seems to be the best current storage solution, and it works — oh yes, and it’s free to those willing to learn and spend the time required to learn how to use it.

    I can’t say whether your proposed hardware will definitely work or not, as I’m not familiar with the hardware.

    You can check for your hardware on the Solaris HCL (hardware compatibility list).

    However, 6 drives should be good for a data pool comprising capacity of 4 for data and 2 for parity (one RAID-Z2 vdev).

    And 2 mirrored drives for a boot pool is also a good idea.

    In general, Supermicro make excellent motherboards, and Intel chipsets are well-supported by Solaris.

    As well as the HCL, try Googling for the various components including motherboard and solaris and see if there are any happy/sad users.

    Just to answer directly your questions:
    1. Yes, 2 pools are a good idea: (1) 2-drive mirror boot pool, and (2) multi-drive data pool using a single RAID-Z2 vdev for strong redundancy protection and simplicity of maintenance.
    2. Six SATA ports for a RAID-Z2 vdev for your data pool are fine — see 1.
    3. You need to verify Solaris has a driver for that graphics hardware — check the HCL or Google.
    4. In general Intel ICHxx chipsets are well-supported by Solaris, as are their ethernet cards, although I have no personal experience with them, having gone for an AMD/NVidia setup with ECC memory.

    Also, I would strongly recommend you use ECC memory, as ZFS is all about data integrity, and I don’t think you want to write garbage, caused by flipped bits in memory, to disk.

    If any of the ZFS stuff on this site helps you to get a good working ZFS system up and running soon, then feel free to thank me by clicking on a “Buy me a beer” button I shall add soon 🙂

    Good luck!

    Cheers,
    Simon

  89. Hi

    re this part:

    “Within the Solaris file manager UI, you may need to set the attributes within the ‘Permissions’ and ‘Access List’ tabs of the properties for the /test/home/simon directory. After that, you may need to restart the Solaris machine (or probably just restart relevant services), and possibly the client machine to ensure it gets the new properties for the share.”

    Is there any way to do that without the UI? My OSol box is not near a display 🙁

    Did the smb config etc and nothing is showing up in my /shares directory …

  90. For setting permissions / ACLs on new file systems, via SSH or VNC on your client machine, you can use chown to change owner, and use ‘zfs set’ to change the ACL properties — e.g. something like:

    # zfs set aclinherit=passthrough tank/home/fred/photo
    # zfs set aclmode=passthrough tank/home/fred/photo
    # chmod A=\
    owner@:rwxpdDaARWcCos:fd-----:allow,\
    group@:rwxpdDaARWcCos:fd-----:allow,\
    everyone@:rwxpdDaARWcCos:fd-----:deny \
    /tank/home/fred/photo
    # chown fred:fred /tank/home/fred/photo
    # zfs set sharesmb=name=photo tank/home/fred/photo
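    To confirm the properties and ACL took effect, a quick check (using the same hypothetical tank/home/fred/photo file system) looks like this:

    ```shell
    # Show the ZFS properties just set
    zfs get aclinherit,aclmode,sharesmb tank/home/fred/photo

    # Show the directory's owner and full ACL entries
    # (-V prints verbose NFSv4 ACLs on Solaris, -d lists the dir itself)
    ls -Vd /tank/home/fred/photo
    ```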
    
  91. Can’t get auto-mount to work, so I came up with a workaround: I created an Automator workflow that connects to the shares, saved the workflow as an application, and then added that application to my login items.

  92. Hi Shaky,

    On my Mac, I have set up autofs to mount my NAS CIFS shares using the following method:

    Add the following line to /etc/auto_master :

    /- auto_direct
    

    Then create or edit /etc/auto_direct and include the following line for each share you wish to mount :

    # photo library
    /Users/simon/nas/photo/photo -fstype=smbfs ://simon:password@192.168.1.3/photo
    

    In my home directory (/Users/simon) on the Mac, I created a directory called ‘nas’ and within that I created the ‘photo’ directory. This ‘photo’ directory is where autofs will create a mountpoint called ‘photo’.

    I could have put all the mountpoints directly into the ‘nas’ directory, but this had the nasty effect that any access to the /Users/simon/nas directory via the Finder caused the Mac’s autofs to mount all CIFS shares listed in /etc/auto_direct, resulting in an annoyingly long delay before the Finder became responsive. The same problem occurred when opening or saving a file to the NAS from any application via the Finder open/save dialog window. Thus to eliminate this problem, I mounted each of the shares in separate individual directories, each one being one level below the ‘nas’ directory.

    Later on, I had the idea to force autofs to mount all the shares when I log on to my Mac user account. I do this with a script that runs at login. I called the script ‘mount_shares’ and it’s something like this:

    #!/bin/bash
    
    printf "\n" >> /Users/simon/log/mount.log
    date >> /Users/simon/log/mount.log
    
    printf "\nAttempt to mount photo:\n" >> /Users/simon/log/mount.log
    ls -l /Users/simon/nas/photo/photo >> /Users/simon/log/mount.log
    

    The ‘ls’ causes autofs to mount the share. Add a line like this to your ‘mount_shares’ script and then make the script run automatically when you login by adding the script to the ‘Login items’ of your user account in the ‘Accounts’ section of System Preferences.

    Finally, to ensure the autofs mounts stay mounted longer than the default 10 minutes, edit /etc/autofs.conf and modify the timeout to something like this:

    Macintosh:etc simon$ cat autofs.conf
    #
    # This file is used to configure the automounter
    #
    
    # The number of seconds after which an automounted file system will
    # be unmounted if it hasn't been referred to within that period of
    # time.  The default is 10 minutes (600 seconds).
    # This is equivalent to the -t option in automount(8).
    AUTOMOUNT_TIMEOUT=360000
    

    This causes autofs to mount the shares for 360000 seconds, or 100 hours.

    Hope this helps.

    Cheers,
    Simon

  93. Hey everyone,

    I am slogging it out with my Solaris Express 11 box connected to a Windows 7 machine via InfiniBand (drool). The storage pool on the Solaris machine has 3 x 2 TB Deskstar 7K3000 drives, a 5 GB log and a 30 GB cache (cache and log are both on a SandForce SSD). Reads are incredibly fast, well over 150 MB/s, but writes are depressingly slow at 30 MB/s (max). I have tried disabling de-dupe (no luck) and setting log priority to throughput, but neither of these settings helps much.

    When I attempt a large write it starts off quite speedy (100 MB/s+) then drops off after a while (<10 GB).

    The system is running an i3 CPU with 8 GB of RAM. Running top shows very little CPU usage (~20%), all but 800 MB of RAM in use, but no swap.

    Any advice would be greatly appreciated.

    Kind Regards

    Jim

  94. Hi Jim,

    Are you using NFS or CIFS to make the shares available? I seem to recall that NFS writes (synchronous IIRC) are slow but I’ll wait until you say which share method you are using.

    Cheers,
    Simon
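    If it does turn out to be synchronous writes (NFS typically forces them), one hedged way to diagnose is to check the data set’s ‘sync’ behaviour. The property name is standard ZFS on Solaris Express 11; the pool name ‘tank’ is an assumption:

    ```shell
    # Check whether synchronous write semantics are in force
    zfs get sync tank

    # For diagnosis ONLY: temporarily disable sync writes and re-run the copy.
    # If throughput jumps, the bottleneck is the sync path (ZIL / log device).
    # Do not leave this set in production -- it risks losing the last few
    # seconds of writes on a crash or power failure.
    zfs set sync=disabled tank
    # ...run the write test...
    zfs set sync=standard tank
    ```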

  95. It has been a couple of years since anyone posted, but I came across this blog entry and would like to thank the author for the great tutorial. I am just learning, but plan to put together a NAS server with ZFS within a few months. I’m thinking of using OpenIndiana with 3 WD Red 3 TB drives in RAID-Z1 and a second USB external pool. I will be accessing the shares with Windows, Mac, and Android machines. Mounting CIFS shares seems to be difficult on Android 4.2; not sure of the best way to go.

    Any new developments or suggestions?
