Home Fileserver: OpenSolaris 2009.06

After running SXCE for the last year, a failed hardware upgrade meant it was time to install OpenSolaris 2009.06.

I was due to leave on a trip the next morning, but foolishly I decided to install a 3.5″ drive into a 5.25″ drive bay with some anti-vibration rubber grommets. While re-attaching the IDE cable to the back of the drive in an awkward position, I broke one of the IDE interface pins, and the drive was no longer recognised at POST after rebooting. After several attempts to reboot, it was time to face facts: the IDE boot drive that had served me faithfully since these days, over a year ago, had reached the end of the line. Game Over.

I didn’t want to leave my fileserver in a non-bootable state before going away, so I quickly estimated the time needed to get it working again. I had already downloaded an install image of OpenSolaris 2009.06 Preview a couple of weeks before, so I could just boot that and install onto a new drive. I quickly located an old 160GB IDE drive which had “died” in 2004 while trying to boot my Windows system — I knew that because I had written on the drive in black permanent marker: “Dead: 14/12/2004”. This was the drive that had contained my photo collection, which I miraculously managed to recover thanks to 2 PCs on a home LAN, plus ‘Bart’s Universal Boot CD’ and some great software called ‘GetDataBackNT’, if I remember correctly. It was a Hitachi 160GB drive, made shortly after the ‘DeathStar’ fiasco. So I thought this drive would make a perfect boot drive :) Ideally I would have used another drive, of course, but all my SATA ports were in use, so the only option was the IDE interface, and this was my only remaining IDE drive, so it was a no-brainer :)

Installing OpenSolaris 2009.06 Preview was quick and easy. After a short time, I rebooted the system and I was running Solaris again.

The next step was to get the system back to how it was before. At a minimum, that meant recreating the users and groups with the same ids as used before with my previous setup.

Restoring user and group ids

After taking a look at some directories, I saw which user ids and group ids were needed.

Restore user simon and group simon to their original values:

# ls -l /tank/home
drwxr-xr-x   5 501      501            5 May 10 20:40 simon
# groupadd -g 501 simon
# useradd -g simon -u 501 -s /bin/bash -d /export/home/simon -c simon -m simon

Explanation of the useradd parameters used above:
-g simon: adds the user to primary group ‘simon’ (which has groupid 501)
-u 501: assigns userid 501 to this user
-s /bin/bash: sets bash as the default shell for this user
-d /export/home/simon: defines the home directory
-c simon: sets the comment field describing this user
-m: creates the home directory for the user
simon: the login name for this user
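
If there are several users to restore, the same steps can be scripted. Below is a minimal sketch (the `print_restore_cmds` helper and its `name:uid:gid` input format are my own invention, not part of Solaris): it only prints the groupadd/useradd commands, so you can review them before piping them to a root shell.

```shell
# Sketch: print the groupadd/useradd commands needed to recreate
# users with their original numeric ids (dry run -- nothing is applied).
print_restore_cmds() {
  while IFS=: read -r name uid gid; do
    printf 'groupadd -g %s %s\n' "$gid" "$name"
    printf 'useradd -g %s -u %s -s /bin/bash -d /export/home/%s -c %s -m %s\n' \
      "$name" "$uid" "$name" "$name" "$name"
  done
}

# One line per user to restore: name:uid:gid
printf 'simon:501:501\n' | print_restore_cmds
```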

Now time to fix the media user and group ids:

simon@outerspace:~# cd /tank/media
simon@outerspace:/tank/media# ls -l
total 15
drwxrwx---+  5 503      502            6 Apr 14 17:59 music
drwxr-x---+  5 503      502            6 Apr 30 22:52 photos
drwxr-xr-x   5 503      502            5 Apr  7 00:14 video
simon@outerspace:/tank/media#
# groupadd -g 502 media
# useradd -g media -u 503 -s /bin/bash media
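
A quick way to confirm the ids now resolve is that `ls -l` shows names instead of raw numbers. The small `check_unmapped` filter below is a sketch of my own (not a standard tool): it flags entries whose owner column is still numeric, i.e. a uid with no matching passwd entry. Here it is fed a sample line rather than live `ls` output.

```shell
# Sketch: flag `ls -l` entries whose owner column is still a raw uid,
# meaning no user has been recreated for that id yet.
check_unmapped() {
  awk '$3 ~ /^[0-9]+$/ { print $NF ": unmapped uid " $3 }'
}

# Sample `ls -l` line with an unmapped owner
# (normally you would run: ls -l /tank/media | check_unmapped)
printf 'drwxr-xr-x 5 503 502 5 Apr 7 00:14 video\n' | check_unmapped
```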

Packages and shares

OpenSolaris 2009.06 includes the Image Packaging System (IPS), and as it is a relatively light install, you will probably find that some software you need is missing, necessitating a session of finding and installing whatever you need.

In order to get my system functional again, I needed to get IPS operational:

# pkg set-authority -O http://pkg.opensolaris.org/dev/ opensolaris.org

Then I needed to install the CIFS server code:

# pkg install SUNWsmbskr
# pkg install SUNWsmbs

Then the usual stuff to get CIFS shares working:

# echo other password required pam_smb_passwd.so.1 nowarn >> /etc/pam.conf
# smbadm join -w WORKGROUP
# svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
root@outerspace:~# svcs | grep milestone/network
online         18:42:40 svc:/milestone/network:default
(the warning can be ignored if the service grep shows the milestone/network service is online)
# svcadm restart smb/server

All the shares were already set up within my ZFS storage pool, so luckily this didn’t need doing again.
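
For reference, if the shares had not survived, sharing a dataset over CIFS is one command per dataset via the sharesmb property. This is a sketch with an example dataset name, shown without a test run:

```shell
# Share a dataset over CIFS; clients will see a share named 'media'.
# (tank/media is an example dataset name -- substitute your own.)
zfs set sharesmb=name=media tank/media
zfs get sharesmb tank/media    # confirm the property is set
```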

The next step was to import the ZFS storage pool, which needed the -f option specified as the pool had never been exported:

# zpool import -f tank

After rebooting, everything was back working again. Total time from breaking system to being operational again: two to three hours — not too bad.

simon@outerspace:~$ su
Password:
simon@outerspace:~# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   153G  6.57G   146G     4%  ONLINE  -
tank   4.06T  2.00T  2.07T    49%  ONLINE  -
simon@outerspace:~#
simon@outerspace:~# zpool status tank
  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c8t0d0   ONLINE       0     0     0
            c8t1d0   ONLINE       0     0     0
            c9t0d0   ONLINE       0     0     0
            c9t1d0   ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0

errors: No known data errors
simon@outerspace:~#
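
The status output notes that the pool is on an older on-disk format. Upgrading is a single command, but it is one-way: once upgraded, older software builds can no longer import the pool, so only do it once you are happy with the new install. A sketch of the commands (shown here without output):

```shell
zpool upgrade -v      # list the on-disk versions this build supports
zpool upgrade tank    # one-way: older releases can no longer import the pool
```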

During the head-scratching caused by seeing OpenSolaris for the first time and wondering how to get CIFS sharing working again, I found the following blog posts helped me out:

http://jmlittle.blogspot.com/2008/03/step-by-step-cifs-server-setup-with.html
http://opensolaris.org/jive/thread.jspa?threadID=101641

Thanks guys!

For more ZFS Home Fileserver articles see here: A Home Fileserver using ZFS. Alternatively, see related articles in the following categories: ZFS, Storage, Fileservers, NAS.



10 Responses to “Home Fileserver: OpenSolaris 2009.06”

  1. Hi Simon,

    I have used a ZFS home fileserver like yours for the past 18 months. I used iSCSI targets with OpenSolaris; on my iMac I use the globalSAN iSCSI Initiator and had good performance. Today I switched to 2009.06 and my iSCSI performance is very bad:

    https://opensolaris.org/jive/thread.jspa?messageID=388492

    any ideas?

    thanks for your home filer blog!!

  2. Hi Peter,

    Indeed, the slow write speed to iSCSI targets with OpenSolaris 2009.06 seems to be caused by the fix for bug 6770534, which uses synchronous writes — i.e. the writes require acknowledgements, rather than just squirting all the write data into a large cache. It looks like the solution will be found by following the thread you mention, as other people certainly seem to have encountered the same problem. Good luck! (I don’t use iSCSI at the moment)

    Cheers,
    Simon

  3. Hi Simon,

    for your info:

    I upgraded my system to SXCE build 116 and now everything works fine. I switched to COMSTAR and I could use my old volumes. Take a look at COMSTAR!

    What I did:

    activate COMSTAR:
    svcadm enable stmf

    everything ok?:
    svcs stmf
    stmfadm list-state

    create LUN:
    sbdadm create-lu /dev/zvol/rdsk/tank/vol_1
    (vol_1 is my old volume on zfs pool tank)

    ok?:
    sbdadm list-lu
    stmfadm list-lu -v

    add view:
    stmfadm add-view <GUID>
    (the GUID is shown in the output of ‘sbdadm list-lu’)

    disable old scsi-target:
    zfs set shareiscsi=off tank/vol_1

    is old iscsitgtd running?:
    svcs iscsitgt

    yes, then disable iscsitgt:
    svcadm disable iscsitgt

    enable new service:
    svcadm enable -r svc:/network/iscsi/target:default

    ok?:
    svcs -a | grep -i iscsi

    create new target:
    itadm create-target

    ok?:
    itadm list-target
    itadm list-target -v

    Then i use globalSAN iSCSI Initiator 3.3.0.43 on my iMac.

    COMSTAR: http://wikis.sun.com/display/OpenSolarisInfo/COMSTAR+Administration

    Cheers,
    Peter

  4. Hi Peter,

    Glad you got the iSCSI working, and thanks for the COMSTAR info — I’ll have to take a look sometime!

    Cheers,
    Simon

  5. Hi Peter,

    When you created the LUN
    create LUN:
    sbdadm create-lu /dev/zvol/rdsk/tank/vol_1
    (vol_1 is my old volume on zfs pool tank)

    Didn’t it kill your old zvol?

    I did exactly the same thing trying to migrate from iscsitgt to COMSTAR and it destroyed my zvols contents.

    I have been following a few posts and it seems that creating a new LUN writes data into the first 64 bytes of the zvol, destroying its contents.

    Did you do anything special in order not to destroy the zvols contents?

    Cheers,
    Marcio

  6. Good technical Solaris book for free:
    http://www.c0t0d0s0.org/pages/lksfbook.html

  7. Thank you so much for writing this tutorial.

    As a tip for others who follow this, NEVER USE UPPERCASE LETTERS IN YOUR Solaris username if you want to make CIFS/smb shares work easily.

    e.g. user Andrew –> 10 hours of wasted time in changing settings, ending in a reinstall.
    e.g. user andrew –> works straight away (but you don’t appreciate it as much! :-] )

  8. Are you going to upgrade to Solaris 11 Express?

  9. Thanks Andrew. Sorry for long delay in replying.

    Cheers,
    Simon

  10. Hi Paul,

    I haven’t yet considered whether to upgrade to Solaris 11 Express.
    However, in general I prefer a more community-orientated approach to development/releases, so it is quite possible I will choose OpenIndiana instead of Oracle’s offerings. But I will make that decision when I feel a need to upgrade for new features, which is not now: build 134 of OpenSolaris is still serving my needs well.

    Cheers,
    Simon
