After running SXCE for the last year, a failed hardware upgrade meant it was time to install OpenSolaris 2009.06.
I was due to leave on a trip the next morning, but foolishly I decided to install a 3.5″ drive into a 5.25″ drive bay with some anti-vibration rubber grommets. While re-attaching the IDE cable to the back of the drive in an awkward spot, I broke one of the IDE interface pins, and the drive was no longer recognised at POST after rebooting. After several attempts to reboot, it was time to face facts. The IDE boot drive that had served me faithfully since these days, over a year ago, had reached the end of the line. Game Over.
I didn’t want to leave my fileserver in a non-bootable state before going away, so I quickly estimated the time needed to get it working again. I had already downloaded an install image of OpenSolaris 2009.06 Preview a couple of weeks before, so I could just boot that and install onto a new drive. I quickly located an old 160GB IDE drive which had “died” in 2004 while trying to boot my Windows system; I knew that because I had written on it in black permanent marker: “Dead: 14/12/2004”. This was the drive that had contained my photo collection, which I miraculously managed to recover thanks to having two PCs on a home LAN, plus ‘Bart’s Universal Boot CD’ and some great software called ‘GetDataBackNT’, if I remember correctly. It was a Hitachi 160GB drive, made shortly after the ‘DeathStar’ fiasco, so I thought it would make a perfect boot drive. Ideally I would have used another drive, of course, but all my SATA ports were in use, so the only option was the IDE interface, and as that was my only remaining IDE drive, it was a no-brainer.
Installing OpenSolaris 2009.06 Preview was quick and easy. After a short time, I rebooted the system and I was running Solaris again.
The next step was to get the system back to how it was before. At a minimum, that meant recreating the users and groups with the same ids as used before with my previous setup.
Restoring user and group ids
After taking a look at some directories, I saw which user ids and group ids were needed.
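Numeric ids can be read straight off a long listing by telling ls to skip name resolution. A minimal sketch (POSIX shell; /tank/home is the path from this article, so substitute your own):

```shell
# -n makes ls print raw numeric uid/gid instead of resolved names.
# The article inspected /tank/home; point "dir" wherever your data lives.
dir=${DIR:-/tank/home}
ls -ln "$dir" | awk 'NF >= 4 { print $3, $4 }'   # owner-uid and group-gid per entry
```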
Restore user simon and group simon to their original values:
# ls -l /tank/home
drwxr-xr-x 5 501 501 5 May 10 20:40 simon
# groupadd -g 501 simon
# useradd -g simon -u 501 -s /bin/bash -d /export/home/simon -c simon -m simon
Explanation of useradd parameters used above:
-g simon: sets the user’s primary group to ‘simon’ (which has groupid 501)
-u 501: assigns userid 501 to this user
-s /bin/bash: sets bash as the default shell for this user
-d /export/home/simon: defines the home directory
-c simon: sets the comment/description field for this user
-m: creates the home directory for the user
simon: the login name for this user
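The same pair of commands generalises to any list of users to restore. A sketch that only prints the commands as a dry run; the name:uid:gid triples are the values from this article, and you would drop the echo and run as root to actually apply them:

```shell
# name:uid:gid triples to restore, taken from the article; edit to suit.
for entry in simon:501:501 media:503:502; do
  name=${entry%%:*}
  rest=${entry#*:}
  uid=${rest%%:*}
  gid=${rest#*:}
  # Dry run: remove the echo to execute for real (as root).
  echo groupadd -g "$gid" "$name"
  echo useradd -g "$name" -u "$uid" -s /bin/bash -d "/export/home/$name" -m "$name"
done
```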
Now it was time to fix the media user and group ids:
simon@outerspace:~# cd /tank/media
simon@outerspace:/tank/media# ls -l
total 15
drwxrwx---+ 5 503 502 6 Apr 14 17:59 music
drwxr-x---+ 5 503 502 6 Apr 30 22:52 photos
drwxr-xr-x  5 503 502 5 Apr  7 00:14 video
simon@outerspace:/tank/media#
# groupadd -g 502 media
# useradd -g media -u 503 -s /bin/bash media
Packages and shares
OpenSolaris 2009.06 includes the Image Packaging System (IPS), and as it is a relatively light install, you will probably find that software you need is missing, necessitating a session of finding and installing whatever you require.
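A typical find-and-install session boils down to a search against the repository followed by an install. Shown here as a dry run so nothing is changed (SUNWsmbs is one of the packages installed later in this post; remove the echo and run as root to apply):

```shell
# Dry run of an IPS search-then-install session; remove the echo to run for real.
echo pkg search -r smb        # search the configured repository for 'smb'
echo pkg install SUNWsmbs     # install the chosen package
```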
In order to get my system functional again, I needed to get IPS operational:
# pkg set-authority -O http://pkg.opensolaris.org/dev/ opensolaris.org
Then I needed to install the CIFS server code:
# pkg install SUNWsmbskr
# pkg install SUNWsmbs
Then the usual stuff to get CIFS shares working:
# echo other password required pam_smb_passwd.so.1 nowarn >> /etc/pam.conf
# smbadm join -w WORKGROUP
# svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
root@outerspace:~# svcs | grep milestone/network
online 18:42:40 svc:/milestone/network:default
(ignore the warning message if the svcs grep shows the milestone/network service is online)
# svcadm restart smb/server
All the shares were already setup within my ZFS storage pool, so this didn’t need setting up, luckily.
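For reference, SMB sharing on ZFS is a per-dataset property, which is why the shares came back with the pool. Had they not, something like this would recreate one; the dataset tank/media/music and share name ‘music’ are illustrative assumptions, and the commands are printed rather than run:

```shell
# Dry run: remove the echo and run as root to apply.
# Dataset and share name below are assumptions for illustration.
echo zfs set sharesmb=name=music tank/media/music
echo zfs get sharesmb tank/media/music     # verify the effective property
```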
The next step was to import the ZFS storage pool, which needed the -f option specified because the pool had never been exported:
# zpool import -f tank
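The -f was only needed because the old install never released the pool. When moving a pool deliberately, exporting it first avoids the force flag; a dry-run sketch (remove the echo and run as root):

```shell
# On the old system, before the reinstall or disk move:
echo zpool export tank
# On the new system; no -f is needed after a clean export:
echo zpool import tank
```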
After rebooting, everything was back and working again. Total time from breaking the system to being operational again: two to three hours. Not too bad.
simon@outerspace:~$ su
Password:
simon@outerspace:~# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   153G  6.57G   146G   4%  ONLINE  -
tank   4.06T  2.00T  2.07T  49%  ONLINE  -
simon@outerspace:~# zpool status tank
  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c8t0d0   ONLINE       0     0     0
            c8t1d0   ONLINE       0     0     0
            c9t0d0   ONLINE       0     0     0
            c9t1d0   ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0

errors: No known data errors
simon@outerspace:~#
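The status output also flags that tank uses an older on-disk format. Upgrading is a single command, but it is one-way: older releases can no longer import the pool afterwards. Printed here as a dry run:

```shell
# Remove the echo and run as root to apply. One-way operation:
# after upgrading, older Solaris releases cannot import the pool.
echo zpool upgrade         # show pool format versions
echo zpool upgrade tank    # upgrade 'tank' to the current version
```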
During the head-scratching caused by seeing OpenSolaris for the first time and wondering how to get CIFS sharing working again, I found the following blog posts helped me out: