All entries for January 2011

January 25, 2011

Automated Installer 'How To'

This article will take you through the steps I followed recently to create my own Automated Installer (AI) setup for Solaris 11 Express. The new IPS packaging system and the Automated Installer are central features of Solaris 11 Express, destined to replace the Jumpstart and SVR4 packaging found in Solaris 10. Knowing this technology is a must for any would-be Solaris 11 system admin.

Each Automated Installer client needs access to a package repository (where all the software is kept and catalogued), an AI boot image, manifests that describe the client configuration, and a DHCP server to begin the boot process.

AI Overview Diagram

Creating a local repository

This step is optional: it is possible to simply use the default Oracle repository at http://pkg.oracle.com/solaris/release . In the real world, though, most organisations will want to keep their own local repository for performance or security reasons.

First, you need to create a ZFS filesystem to hold the spooled repository;

root@sol-esx01:~# zpool create repopool c8t1d0s2
root@sol-esx01:~# zfs set mountpoint=/repo repopool

The following 2 commands are optional – they enable ZFS deduplication and compression on the new pool;

root@sol-esx01:~# zfs set dedup=on repopool
root@sol-esx01:~# zfs set compression=on repopool
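
If you want to double-check the pool and the new property values before carrying on, the standard ZFS commands will show them;

root@sol-esx01:~# zpool list repopool
root@sol-esx01:~# zfs get dedup,compression,mountpoint repopool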

Next, create a lofi device from the .iso (you will need to download the Solaris 11 Express Repository Image from the Oracle download site first) and mount it in a convenient location:

root@sol-esx01:~#  lofiadm -a /var/spool/pkg/sol-11-exp-201011-repo-full.iso
root@sol-esx01:~#  mount -F hsfs /dev/lofi/1 /repocd
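
Note that the mount point must exist before the mount above will succeed (and the lofi device name is whatever lofiadm reported; I am assuming /dev/lofi/1 here, as above). If the directory is not already there, create it first;

root@sol-esx01:~# mkdir /repocd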

You can use rsync, as suggested in the AI documentation, to copy the contents of this DVD to your local repository filesystem;

root@sol-esx01:~# rsync -aP /repocd/ /repo
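
As a rough sanity check that the copy completed, you can compare the space used on the source and destination. Compression and dedup on the pool will make the numbers differ, but it shows whether everything came across;

root@sol-esx01:~# du -sh /repocd /repo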

Next, set up the parameters for your pkg server service and enable it;

root@sol-esx01:~# svccfg -s application/pkg/server setprop pkg/inst_root=/repo/repo
root@sol-esx01:~# svccfg -s application/pkg/server setprop pkg/readonly=true

root@sol-esx01:~# svcadm refresh application/pkg/server
root@sol-esx01:~# svcadm enable application/pkg/server
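
It is worth checking that the service actually came online; if it ends up in maintenance instead, svcs -x will usually tell you why;

root@sol-esx01:~# svcs application/pkg/server
root@sol-esx01:~# svcs -x application/pkg/server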

Set your publisher to the local host;

root@sol-esx01:~# pkg set-publisher -O http://localhost solaris

You will need to index/refresh the repository if you want search to work;

root@sol-esx01:~# pkgrepo refresh -s /repo/repo

After that completes, search (and install) should work;

root@sol-esx01:~# pkg search xclock
INDEX           ACTION VALUE                                 PACKAGE
basename        file   usr/share/X11/app-defaults/XClock     pkg:/x11/xclock@1.0.4-0.151
basename        file   usr/bin/xclock                        pkg:/x11/xclock@1.0.4-0.151
basename        link   usr/X11/bin/xclock                    pkg:/x11/xclock@1.0.4-0.151
pkg.description set    xclock is the classic X Window Syst.. pkg:/x11/xclock@1.0.4-0.151
pkg.fmri        set    solaris/x11/xclock                    pkg:/x11/xclock@1.0.4-0.151
pkg.summary     set    xclock - analog / digital clock for X pkg:/x11/xclock@1.0.4-0.151

root@sol-esx01:~# pkg install xclock
               Packages to install:     1
           Create boot environment:    No
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1         6/6      0.0/0.0

PHASE                                        ACTIONS
Install Phase                                  43/43

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2
root@sol-esx01:~#

By default the pkg depot server will listen on port 80;

root@sol-esx01:~# netstat -an| grep '\.80'
      *.80                 *.*                0      0 128000      0 LISTEN
root@sol-esx01:~# 

You should be able to check your work with ‘pkg publisher’;

root@sol-esx01:~# pkg publisher
PUBLISHER                             TYPE     STATUS   URI
solaris                  (preferred)  origin   online   http://localhost/

That’s it: you are now running your own local publisher from your own local pkg server repository. Now on to the configuration of the Automated Installer.

Setting up the AI installer

This section moves step by step through the configuration of the Automated Installer.

First, create a zfs filesystem for the AI storage area;

root@sol-esx01:~# zfs create repopool/ai

If desired, confirm that dedup and compression have been inherited from the pool (again, both optional) and set your mountpoint;

root@sol-esx01:~# zfs get -r dedup repopool
NAME         PROPERTY  VALUE          SOURCE
repopool     dedup     on             local
repopool/ai  dedup     on             inherited from repopool

root@sol-esx01:~# zfs get -r compression repopool
NAME         PROPERTY     VALUE     SOURCE
repopool     compression  on        local
repopool/ai  compression  on        inherited from repopool

root@sol-esx01:~# zfs get mountpoint repopool/ai
NAME         PROPERTY    VALUE       SOURCE
repopool/ai  mountpoint  /repo/ai    inherited from repopool
root@sol-esx01:~# zfs set mountpoint=/ai repopool/ai

Copy the AI .iso into this filesystem. Do not unpack it; the installadm command will do that for you.

root@sol-esx01:~# cp /var/spool/pkg/sol-11-exp-201011-ai-x86.iso /ai/
root@sol-esx01:~# ls /ai
sol-11-exp-201011-ai-x86.iso

Check that your netmasks file contains the necessary entry for your subnet; this is needed for DHCP to operate properly.

root@sol-esx01:~# vi /etc/netmasks
root@sol-esx01:~# tail -2 /etc/netmasks
#
192.168.100.0   255.255.255.0

The DNS multicast service is apparently required, although to be honest, I’ve never tried without it – so make sure this is enabled;

root@sol-esx01:~# svcs -a|grep dns
disabled       16:34:08 svc:/network/dns/install:default
disabled       16:34:10 svc:/network/dns/multicast:default
disabled       16:34:11 svc:/network/dns/server:default
online         16:34:20 svc:/network/dns/client:default

root@sol-esx01:~# svcadm enable dns/multicast
root@sol-esx01:~# svcs -a|grep dns

disabled       16:34:08 svc:/network/dns/install:default
disabled       16:34:11 svc:/network/dns/server:default
online         16:34:20 svc:/network/dns/client:default
online         21:26:48 svc:/network/dns/multicast:default

Running installadm

You can now run installadm to create your install service. Installadm has a really nice feature in that it will create the DHCP service for you if you want it to; it is equally possible to use an existing DHCP service on the same server, or even a DHCP service somewhere else on the subnet. The -i option tells installadm the starting address for your DHCP addresses and the -c option tells installadm how many addresses will be available for lease. The following example will offer addresses in the range 192.168.100.10 – 192.168.100.19.

root@sol-esx01:~# installadm create-service -n sol-11-exp-x86 -i 192.168.100.10 -c 10 \
    -s /ai/sol-11-exp-201011-ai-x86.iso /ai/sol-11-exp-x86-target
Setting up the target image at /ai/sol-11-exp-x86-target ...
Registering the service sol-11-exp-x86._OSInstall._tcp.local
Creating DHCP Server
Created DHCP configuration file.
Created dhcptab.
Added "Locale" macro to dhcptab.
Added server macro to dhcptab - sol-esx01.
DHCP server started.
Unable to determine the proper default router
or gateway for the 192.168.100.0 subnet. The default
router or gateway for this subnet will need to
be provided later using the following command:
   /usr/sbin/dhtadm -M -m 192.168.100.0 -e  Router=<address> -g
Added network macro to dhcptab - 192.168.100.0.
Created network table.
adding tftp to /etc/inetd.conf
Converting /etc/inetd.conf
copying boot file to /tftpboot/pxegrub.I86PC.Solaris-1
Service discovery fallback mechanism set up
Service discovery fallback mechanism set up

I forgot the default router in this example, but the router can easily be added as recommended in the output above;

root@sol-esx01:~# /usr/sbin/dhtadm -M -m 192.168.100.0 -e  Router=192.168.100.1 -g
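
You can print the resulting dhcptab to confirm that the Router symbol made it into the network macro;

root@sol-esx01:~# dhtadm -P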

Check the results;

root@sol-esx01:~# installadm list
Service Name   Status       Arch  Port  Image Path
------------   ------       ----  ----  ----------
sol-11-exp-x86 on           x86   46501 /ai/sol-11-exp-x86-target

Check DHCP too; I really like the fact that installadm sets up DHCP for you, so much pain saved.

root@sol-esx01:~# pntadm -P 192.168.100.0

Client ID       Flags   Client IP       Server IP       Lease Expiration  Macro           Comment

00              00      192.168.100.19  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.18  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.17  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.16  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.15  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.14  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.13  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.12  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.11  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.10  192.168.100.1   Zero              dhcp_macro_sol-11-exp-x86    

Creating a client

Creating a client is also achieved using the installadm command;

root@sol-esx01:~# installadm create-client -e 00:0c:29:05:31:75 -n sol-11-exp-x86
Service discovery fallback mechanism set up
Service discovery fallback mechanism set up

root@sol-esx01:~# installadm list
Service Name   Status       Arch  Port  Image Path
------------   ------       ----  ----  ----------
sol-11-exp-x86 on           x86   46501 /ai/sol-11-exp-x86-target

The -c option will list the client details;

root@sol-esx01:~# installadm list -c
Service Name   Client Address    Arch  Image Path
------------   --------------    ----  ----------
sol-11-exp-x86 00:0C:29:05:31:75 x86   /ai/sol-11-exp-x86-target

The installadm service comes with a default.xml that will serve any client that attempts to boot / install from this server. It is likely, of course, that you will want to change this. There is also an example ‘static_network.xml’ in net_install_image_path/auto_install/sc_profiles that suggests a starting point for statically addressed clients, and this can be copied and changed as required. I copied this to sc_manifest1.xml (as described in the AI admin guide), which is sourced as a follow-up from ai.xml.
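
For reference, the copy itself is just the following (paths as per the image location used above);

root@sol-esx01:~# cd /ai/sol-11-exp-x86-target/auto_install/sc_profiles
root@sol-esx01:~# cp static_network.xml sc_manifest1.xml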

root@sol-esx01:/ai/sol-11-exp-x86-target/auto_install/sc_profiles# ls -lrth
total 7.0K
-r--r--r-- 1 root sys  4.7K 2010-11-05 14:13 static_network.xml
-rw-r--r-- 1 root root  788 2011-01-14 14:02 ai.xml
-r--r--r-- 1 root root 4.7K 2011-01-14 14:25 sc_manifest1.xml

Below is a summary of the changes I made; there are not really that many. I changed the default non-root user (because I don’t know jack, but I do recall that root is a role in Solaris 11 Express, so you can’t log in without a non-root account), I took password hashes from an existing system to use as the passwords, and I changed the hostname and network details. That was about it.

You should now create similar XML files for your clients, making sure you change the parameters as required. Below are the differences between my version and the default;

root@sol-esx01:/ai/sol-11-exp-x86-target/auto_install/sc_profiles# diff sc_manifest1.xml static_network.xml
29,31c29,31
<                 <propval name="login" type="astring" value="cuspdx"/>
<                 <propval name="password" type="astring" value="$5$4X$wATzoqQD8pAxPErqESs3z0r9ypHkHeVsprsBgjmR3sD"/>
<                 <propval name="description" type="astring" value="Paul Eggleton"/>
---
>                 <propval name="login" type="astring" value="jack"/>
>                 <propval name="password" type="astring" value="9Nd/cwBcNWFZg"/>
>                 <propval name="description" type="astring" value="default_user"/>
40c40
<                 <propval name="password" type="astring" value="$5$Z7$GdeY1gSCF........jcuFI4"/>
---
>                 <propval name="password" type="astring" value="$5$VgppCOxA$ycFmY.......niNCouC"/>
46c46
<                 <propval name="hostname" type="astring" value="sol-esx02"/>
---
>                 <propval name="hostname" type="astring" value="solaris"/>
73c73
<                 <propval name='name' type='astring' value='e1000g0/v4'/>
---
>                 <propval name='name' type='astring' value='net0/v4'/>
75,76c75,76
<                 <propval name='static_address' type='net_address_v4' value='192.168.100.5/24'/>
<                 <propval name='default_route' type='net_address_v4' value='192.168.100.1'/>
---
>                 <propval name='static_address' type='net_address_v4' value='x.x.x.x/n'/>
>                 <propval name='default_route' type='net_address_v4' value='x.x.x.x'/>
80c80
<                 <propval name='name' type='astring' value='e1000g0/v6'/>
---
>                 <propval name='name' type='astring' value='net0/v6'/>
89c89
<         <instance name='default' enabled='false'>
---
>         <instance name='default' enabled='true'>
93c93
<                         <value_node value='192.168.100.1'/>
---
>                         <value_node value='x.x.x.x'/>

Following this, create the initial ai.xml. This specifies the boot device, the instance name and the publisher details (using the local repository, not the Oracle one). Finally, it sources the sc_manifest1.xml mentioned above;

root@sol-esx01:/ai/sol-11-exp-x86-target/auto_install/sc_profiles# more ai.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE auto_install SYSTEM "file:///usr/share/auto_install/ai.dtd">
<auto_install>
  <ai_instance name="sol-esx02">
    <target>
      <target_device>
      <disk>
          <disk_keyword key="boot_disk"/>
      </disk>
      </target_device>
    </target>
 <software>
   <source>
   <publisher name="solaris">
     <origin name="http://192.168.100.1"/>
   </publisher>
   </source>
  <software_data action="install" type="IPS">
     <name>pkg:/server_install</name>
  </software_data>
  <software_data action="uninstall" type="IPS">
     <name>pkg:/server_install</name>
  </software_data>
 </software>
 <add_drivers>
    <search_all/>
 </add_drivers>
 <sc_manifest_file name="AI" URI="./sc_manifest1.xml"/>
 </ai_instance>
</auto_install>

To add a new manifest file, use installadm add-manifest as shown below. The -n option specifies the service name and the -c option allows you to specify criteria for matching clients that should use this manifest. Any clients that do not match the criteria are still catered for by the default.xml manifest, which resides in net_install_image_path/auto_install/default.xml, in this case: /ai/sol-11-exp-x86-target/auto_install/default.xml

root@sol-esx01:~# installadm add-manifest -m ai.xml -n sol-11-exp-x86 -c MAC="00:0C:29:05:31:75"

The criteria above use the client MAC address, but there are other valid criteria with which to distinguish your clients. In addition to the MAC address, the options include the platform type (as reported by uname -i), the IPv4 address, the CPU architecture (such as ARCH=i86pc, think uname -p), and system memory range values (think particular, cut-down installs for low-memory machines). There is also a -C option which expects an XML file containing the description of the criteria.

Example;

# installadm add-manifest -m ai.xml -n sol-11-exp-x86 -C /tmp/criteria.xml

Where criteria.xml may contain:

     <ai_criteria_manifest>
         <ai_criteria name="MAC">
             <value>00:0C:29:05:31:75</value>
         </ai_criteria>
     </ai_criteria_manifest>

The current criteria and manifests can be seen using installadm list -c and -m:

root@sol-esx01:/ai/sol-11-exp-x86-target/auto_install/sc_profiles# installadm list -c -m
Service Name   Client Address    Arch  Image Path
------------   --------------    ----  ----------
sol-11-exp-x86 00:0C:29:05:31:75 x86   /ai/sol-11-exp-x86-target

Service Name   Manifest
------------   --------
sol-11-exp-x86 target1.xml

Client boot

You should now be good to go: you can attempt a network boot from your client. Out of interest, the contents of the intended client GRUB menu should be viewable under /tftpboot, in a menu.lst file named after the install service. For example;

root@sol-esx01:~# more /tftpboot/menu.lst.sol-11-exp-x86
default=0
timeout=30
min_mem64=1000
title Oracle Solaris 11 Express snv_151a boot image
      kernel$ /I86PC.Solaris-1/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://192.168.100.1:5555/ai/sol-11-exp-x86-target,install_service=sol-11-exp-x86,install_svc_address=192.168.100.1:46501
      module$ /I86PC.Solaris-1/platform/i86pc/$ISADIR/boot_archive
title Oracle Solaris 11 Express snv_151a Automated Install
      kernel$ /I86PC.Solaris-1/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://192.168.100.1:5555/ai/sol-11-exp-x86-target,install_service=sol-11-exp-x86,install_svc_address=192.168.100.1:46501
      module$ /I86PC.Solaris-1/platform/i86pc/$ISADIR/boot_archive
root@sol-esx01:~# 

The client should fairly quickly receive a PXE boot offer and display the GRUB boot menu with an option for Automated Install. In my case the client transferred and booted the miniroot without any fuss.
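
If the client never gets as far as pulling down pxegrub, it is worth checking on the server that the tftp service was actually enabled (installadm added it to inetd during create-service above) and that the DHCP server is running;

root@sol-esx01:~# inetadm | grep tftp
root@sol-esx01:~# svcs -a | grep dhcp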

The lease / DHCP details can be viewed with pntadm to confirm the address offered and the lease duration;

root@sol-esx01:~# pntadm -P 192.168.100.0

Client ID       Flags   Client IP       Server IP       Lease Expiration   Macro           Comment

01000C29053175  00      192.168.100.19  192.168.100.1   01/15/2011         dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.18  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.17  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.16  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.15  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.14  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.13  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.12  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.11  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86       
00              00      192.168.100.10  192.168.100.1   Zero               dhcp_macro_sol-11-exp-x86 

Choose the Automated Install option when you see the GRUB menu. You should see the client DHCP boot, download the miniroot boot image and begin installing. Snoop will, of course, show you the HTTP traffic during the install.

root@sol-esx01:~# snoop -d e1000g1
Using device e1000g1 (promiscuous mode)
   sol-esx01 -> 192.168.100.19 HTTP HTTP/1.1 200 OK
   sol-esx01 -> 192.168.100.19 HTTP (body)
   sol-esx01 -> 192.168.100.19 HTTP (body)
192.168.100.19 -> sol-esx01    HTTP C port=54454
   sol-esx01 -> 192.168.100.19 HTTP HTTP/1.1 200 OK

After a fairly short period of time, and ZERO interaction, you should have your newly installed client available to use;

cuspdx@sol-esx02:~$ hostname
sol-esx02
cuspdx@sol-esx02:~$ uname -a
SunOS sol-esx02 5.11 snv_151a i86pc i386 i86pc Solaris
cuspdx@sol-esx02:~$
cuspdx@sol-esx02:~$
cuspdx@sol-esx02:~$ uptime
 22:11pm  up 5:02,  1 user,  load average: 0.00, 0.00, 0.00
cuspdx@sol-esx02:~$
cuspdx@sol-esx02:~$
cuspdx@sol-esx02:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
rpool/ROOT/solaris     17G  2.6G   14G  17% /
swap                  1.4G  396K  1.4G   1% /etc/svc/volatile
/usr/lib/libc/libc_hwcap1.so.1
                       17G  2.6G   14G  17% /lib/libc.so.1
swap                  1.4G  4.0K  1.4G   1% /tmp
swap                  1.4G   40K  1.4G   1% /var/run
rpool/export           14G   32K   14G   1% /export
rpool/export/home      14G   32K   14G   1% /export/home
rpool/export/home/cuspdx
                       14G   34K   14G   1% /export/home/cuspdx
rpool                  14G   93K   14G   1% /rpool

Something worthy of note is the fact that the software distribution is far from the burgeoning megapack that was included in Solaris 10. For example, the SUNWCXall distribution set for Solaris 10 update 9 consisted of around 6 GB of software when installed;

-bash-3.00$ df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10u9       134G   6.0G    72G     8%    /

Whereas the Solaris 11 Express install contains just 2.6 GB;

Filesystem            Size  Used Avail Use% Mounted on
rpool/ROOT/solaris     17G  2.6G   14G  17% /

This is a good thing, because the majority of the software installed under Solaris 10 would never have been used; it was simply there because of the coarse-grained package distribution model offered by the installer. Minimisation in Solaris 10 with SVR4 packaging was difficult to achieve with confidence that everything would work as expected and remain patchable with predictable results. Less software means less scope for security vulnerabilities to be introduced. Less software is also likely to mean fewer unnecessary daemons running, less memory consumed and better overall performance. In addition, a dataset this size will be much easier to maintain, back up and replicate, consuming both less space and less time.

In addition, if you find you need some of the missing software, you can simply use the pkg framework to search the repository and install the software you need AND any required dependencies in one fell swoop. For example, gcc-3 and a couple of prerequisite packages (such as lint) can be installed by simply typing ‘pkg install gcc-3’;

root@sol-esx01:~# pkg install gcc-3
               Packages to install:     3
           Create boot environment:    No
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  3/3     779/779    30.1/30.1

PHASE                                        ACTIONS
Install Phase                              1214/1214

PHASE                                          ITEMS
Package State Update Phase                       3/3
Image State Update Phase                         2/2
root@sol-esx01:~#
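
If you want to see exactly what came down with it, pkg info and pkg contents will show the package details and contents;

root@sol-esx01:~# pkg info gcc-3
root@sol-esx01:~# pkg contents gcc-3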
 

In summary then, AI is not all that difficult to configure, it is incredibly easy to use once set up, and the benefits for package and system management are huge. In this article I’ve shown you how to:

  • Create your own local repository area (standard ZFS commands)
  • Create a spool of the repo media (lofiadm) and set up the pkg/server SMF service (using svcadm and svccfg properties).
  • Set the default publisher to use this repository (pkg set-publisher)
  • Create a ZFS area for the auto installer (standard ZFS commands)
  • Create an install service (installadm create-service)
  • Create a manifest and apply the XML manifest (installadm add-manifest) to a client
  • Install that client in a hands-off, automated but custom manner, straight out of the box.

Along the way I have also described some of the benefits of the IPS / AI model over SVR4 packaging.

The real power and flexibility will come from the customisation of clients using multiple XML manifests, matching these in pretty much any way you please to the clients within your environment. I think it is fair to say that it is easy to see the features and benefits that Auto Install and the new IPS repository-based packaging tools will bring to Solaris 11. What we all need to work on in the short term, to get the most out of this technology, is the transition from Solaris 10, JET, Jumpstart, flash archives and other familiar technologies to IPS and AI.

I hope you found this useful, comments welcome.

Paul.


January 19, 2011

Failed Disk in x4500 and hdtool

Writing about web page http://download.oracle.com/docs/cd/E19962-01/820-1120-22/chapter2.html#d0e1462

Today we had a disk failure in one of the x4500s. These machines hold 48 x 500 GB SATA disk drives, and the drives are what Oracle term CRUs (Customer Replaceable Units). That, of course, means there has to be a straightforward and reliable way to identify the physical location of any given disk from the information given in the messages file. There is a very useful tool on the ‘x4500 Tools and Drivers CD’ called ‘hd’, or hdtool.

The Tools and Drivers CD can be downloaded from My Oracle Support (MOS), under Patches and Updates: search using “Product or Family”, set the product to search for to ‘x4500’, and you should find the Tools and Drivers CD in the list.

Once downloaded, unzip it into /var/spool/pkg and find the .iso contained within;

bash-3.00# unzip p10335199_160_Generic.zip
Archive:  p10335199_160_Generic.zip
   creating: Tools_and_Drivers/
  inflating: Tools_and_Drivers/X4500_Tools_And_Drivers_common_47001.zip 
  inflating: Tools_and_Drivers/license_agreement1.html 
  inflating: Tools_and_Drivers/MD5SUM-SoftwareAndDocumentation.txt 
  inflating: Tools_and_Drivers/readme.html 
  inflating: Tools_and_Drivers/X4500_Tools_And_Drivers_linux_47001.tar.bz2 
  inflating: Tools_and_Drivers/X4500_Tools_And_Drivers_windows_47001.zip 
  inflating: Tools_and_Drivers/X4500_Tools_And_Drivers_read_me.html 
  inflating: Tools_and_Drivers/X4500_Tools_And_Drivers_solaris_47001.tar.bz2 
  inflating: Tools_and_Drivers/X4500_Tools_And_Drivers_CD_47001.iso  

Lofi mount it;

bash-3.00# lofiadm -a /var/spool/pkg/Tools_and_Drivers/X4500_Tools_And_Drivers_CD_47001.iso
/dev/lofi/1
bash-3.00# mount -F hsfs /dev/lofi/1 /mnt
bash-3.00# cd /mnt

Then pkgadd the package from /mnt/solaris/tools/hdtool/SUNWhd-1.07.pkg;

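The relative path in the pkgadd below assumes you have changed into the hdtool directory on the mounted CD first;

bash-3.00# cd /mnt/solaris/tools/hdtool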
bash-3.00# pkgadd -d SUNWhd-1.07.pkg
The following packages are available:
  1  SUNWhd     Sun Fire X4500/X4540 Hard Disk Suite
                (i386) 1.07

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:

Processing package instance <SUNWhd> from </mnt/solaris/tools/hdtool/SUNWhd-1.07.pkg>

Sun Fire X4500/X4540 Hard Disk Suite(i386) 1.07
Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Using </opt> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWhd> [y,n,?] y

Installing Sun Fire X4500/X4540 Hard Disk Suite as <SUNWhd>

## Installing part 1 of 1.
/opt/SUNWhd/hd/bin/hd
/opt/SUNWhd/hd/bin/hd.html
/opt/SUNWhd/hd/bin/hdadm
/opt/SUNWhd/hd/bin/hdadm.html
/opt/SUNWhd/hd/bin/read_cache
/opt/SUNWhd/hd/bin/write_cache
[ verifying class <none> ]
## Executing postinstall script.

Installation of <SUNWhd> was successful.
bash-3.00#

You can now use this tool to view information regarding the disks in your system;

bash-3.00# ./hd -?
./hd: illegal option -- ?
 Usage: hd [ -c(olor mode) ] [ -s(ummary) ] [ -p(latform) ]
 [ -b(ypass) to print SunFireX4500 map ]
 [ -d(iagnose)] [-f { syslog_file } ]
 [ -m { adjacent | cross | front2back | diagonal } Mapping pairs ]
 [ -w { <pci_disk_device_path> } ]
 [ -a (fdisk pArtition type) ]
 [ -q (list drive slot number in seQuential list) ]
 [ -g (list drive slot number in seQuential list with temperature ) ]
 [ -l (List SunFireX4500/X4540 available disk in physical orders) ]
 [ -r (List SMART data for all disks in drive slot number) ]
 [ -R (List SMART data's indivdual id in landscape view for all disks) ]
 [ -e <cXtY> (List SMART data for specified disk) ]
 [ -E <cXtY> (List raw hex SMART data for specified disk) ]
 [ -j (List SunFireX4500/X4540 HBA controller numbers and pci nodes) ]
 [ -T (List vtoc for all drives for SunFireX4500/X4540 platform) ]
 [ -t (List vtoc for specified drives) ]
 [ -i (List cXtY, sd# and PCI path) ]
 [ -o (List LSI HBA#, Drive Target# and cXtY) ]
 [ -x (Generate hd_map.html) ]

The HD map it generates includes, amongst other detail, a simple way of showing the status of each drive:

++: Device is present and accessible.
Red: Device not enumerated or no drive in physical slot/location.
--: Device is not accessible, absent/empty or down.

For example, the following shows the failure of c4t4;

---------------------SunFireX4500------Rear----------------------------
36:   37:   38:   39:   40:   41:   42:   43:   44:   45:   46:   47:  
c3t3  c3t7  c2t3  c2t7  c5t3  c5t7  c4t3  c4t7  c1t3  c1t7  c0t3  c0t7 
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++  
24:   25:   26:   27:   28:   29:   30:   31:   32:   33:   34:   35:  
c3t2  c3t6  c2t2  c2t6  c5t2  c5t6  c4t2  c4t6  c1t2  c1t6  c0t2  c0t6 
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++  
12:   13:   14:   15:   16:   17:   18:   19:   20:   21:   22:   23:  
c3t1  c3t5  c2t1  c2t5  c5t1  c5t5  c4t1  c4t5  c1t1  c1t5  c0t1  c0t5 
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++  
 0:    1:    2:    3:    4:    5:    6:    7:    8:    9:   10:   11:  
c3t0  c3t4  c2t0  c2t4  c5t0  c5t4  c4t0  c4t4  c1t0  c1t4  c0t0  c0t4 
^b+   ^b+   ^++   ^++   ^++   ^++   ^++   ^--   ^++   ^++   ^++   ^++  
-------*-----------*-SunFireX4500--*---Front-----*-----------*----------
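
Based on the usage output above, the -b option prints just this map and the -x option generates an HTML version (hd_map.html); a quick sketch, assuming the tool is installed in the default location;

bash-3.00# cd /opt/SUNWhd/hd/bin
bash-3.00# ./hd -b
bash-3.00# ./hd -x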

More complete documentation can be found in the hd.html and hdadm.html files installed alongside the tool in /opt/SUNWhd/hd/bin.

bash-3.00# cd /opt/SUNWhd/hd/bin/
bash-3.00# ./hd

platform = Sun Fire X4500

Device    Serial        Vendor   Model             Rev  Temperature    
------    ------        ------   -----             ---- -----------    
c0t4d0p0  F400P6G4ES5F  ATA      HITACHI HUA7250S  A90A 26 C (78 F)
c0t3d0p0  F400P6G4XGYF  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c5t6d0p0  F400P6G4VU4F  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c5t1d0p0  F400P6G4LH8F  ATA      HITACHI HUA7250S  A90A 28 C (82 F)
c1t4d0p0  F400P6G50N6F  ATA      HITACHI HUA7250S  A90A 26 C (78 F)
c1t3d0p0  A570P6G4LSBF  ATA      HITACHI HUA7250S  A90A 31 C (87 F)
c6t7d0p0  F400P6G4WKDF  ATA      HITACHI HUA7250S  A90A 31 C (87 F)
c6t0d0p0  F500P6G4VJDF  ATA      HITACHI HUA7250S  A90A 25 C (77 F)
c4t6d0p0  F400P6G4XX2F  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c4t1d0p0  F400P6G4ZH6F  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c3t5d0p0  F400P6G4RBWF  ATA      HITACHI HUA7250S  A90A 29 C (84 F)
c3t2d0p0  F400P6G4X16F  ATA      HITACHI HUA7250S  A90A 31 C (87 F)
c5t0d0p0  F400P6G0JWTF  ATA      HITACHI HUA7250S  A90A 26 C (78 F)
c5t7d0p0  A570P6G4G25F  ATA      HITACHI HUA7250S  A90A 31 C (87 F)
c0t2d0p0  F400P6G4UY0F  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c0t5d0p0  F400P6G4KVDF  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c3t3d0p0  A570P6G4LA4F  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c3t4d0p0  F500P6G4XWYF  ATA      HITACHI HUA7250S  A90A 26 C (78 F)
c4t0d0p0  F500P6G531WF  ATA      HITACHI HUA7250S  A90A 27 C (80 F)
c4t7d0p0  A570P6G4R5VF  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c6t1d0p0  F400P6G4WW1F  ATA      HITACHI HUA7250S  A90A 28 C (82 F)
c6t6d0p0  F400P6G4XEHF  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c1t2d0p0  F400P6G4Z54F  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c1t5d0p0  F400P6G4SL0F  ATA      HITACHI HUA7250S  A90A 28 C (82 F)
c3t7d0p0  F400P6G4MAZF  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c3t0d0p0  F500P6G5538F  ATA      HITACHI HUA7250S  A90A 27 C (80 F)
c4t4d0p0  F400P6G0JSGF  ATA      HITACHI HUA7250S  A90A 28 C (82 F)
c4t3d0p0  F400P6G4YUUF  ATA      HITACHI HUA7250S  A90A 34 C (93 F)
c6t5d0p0  F400P6G4SLEF  ATA      HITACHI HUA7250S  A90A 27 C (80 F)
c6t2d0p0  F400P6G4Z4KF  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c1t6d0p0  F400P6G4X5BF  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c1t1d0p0  F400P6G0JYUF  ATA      HITACHI HUA7250S  A90A 29 C (84 F)
c5t4d0p0  F400P6G42U3F  ATA      HITACHI HUA7250S  A90A 26 C (78 F)
c5t3d0p0  F400P6G4X8KF  ATA      HITACHI HUA7250S  A90A 31 C (87 F)
c0t6d0p0  F400P6G4X9XF  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c0t1d0p0  F400P6G4KGYF  ATA      HITACHI HUA7250S  A90A 29 C (84 F)
c1t0d0p0  F400P6G4XVSF  ATA      HITACHI HUA7250S  A90A 25 C (77 F)
c1t7d0p0  F400P6G49X1F  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c6t3d0p0  A570P6G4G5LF  ATA      HITACHI HUA7250S  A90A 31 C (87 F)
c6t4d0p0  F400P6G4W3TF  ATA      HITACHI HUA7250S  A90A 25 C (77 F)
c4t2d0p0  F400P6G4WK6F  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c4t5d0p0  F400P6G4UHMF  ATA      HITACHI HUA7250S  A90A 29 C (84 F)
c3t1d0p0  F400P6G0K0HF  ATA      HITACHI HUA7250S  A90A 29 C (84 F)
c3t6d0p0  F400P6G4U62F  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c0t0d0p0  F400P6G0JVZF  ATA      HITACHI HUA7250S  A90A 26 C (78 F)
c0t7d0p0  F400P6G4X75F  ATA      HITACHI HUA7250S  A90A 32 C (89 F)
c5t2d0p0  F400P6G4U47F  ATA      HITACHI HUA7250S  A90A 30 C (86 F)
c5t5d0p0  F400P6G4N7LF  ATA      HITACHI HUA7250S  A90A 27 C (80 F)

---------------------SunFireX4500------Rear----------------------------

36:   37:   38:   39:   40:   41:   42:   43:   44:   45:   46:   47:  
c4t3  c4t7  c3t3  c3t7  c6t3  c6t7  c5t3  c5t7  c1t3  c1t7  c0t3  c0t7 
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++  
24:   25:   26:   27:   28:   29:   30:   31:   32:   33:   34:   35:  
c4t2  c4t6  c3t2  c3t6  c6t2  c6t6  c5t2  c5t6  c1t2  c1t6  c0t2  c0t6 
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++  
12:   13:   14:   15:   16:   17:   18:   19:   20:   21:   22:   23:  
c4t1  c4t5  c3t1  c3t5  c6t1  c6t5  c5t1  c5t5  c1t1  c1t5  c0t1  c0t5 
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++  
 0:    1:    2:    3:    4:    5:    6:    7:    8:    9:   10:   11:  
c4t0  c4t4  c3t0  c3t4  c6t0  c6t4  c5t0  c5t4  c1t0  c1t4  c0t0  c0t4 
^b+   ^b+   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++  
-------*-----------*-SunFireX4500--*---Front-----*-----------*----------

As for the disk failure, I have logged a call with Oracle and am waiting to hear back from dispatch. Fortunately, the disk that failed was being used as a SPARE drive in the main datapool. We have 4 other SPARES at the moment, so replacement isn’t urgent, although we are running with slightly lower resilience to failure in the meantime.
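
For the record, the quickest way to see how ZFS and the fault manager view the failed drive is with the standard status commands;

bash-3.00# zpool status -x
bash-3.00# fmadm faulty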

Paul.

