
Using hwmgr

Replacing a SCSI Disk
To replace a disk (for example, a disk that fails) so that the replacement disk takes on the hardware characteristics of the failed disk, such as ownership of the same device special files, use the hwmgr -redirect scsi command.
Example 6-8 Replacing a Failed SCSI Disk

# hwmgr -show scsi
        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   30:  0         canary     disk    none     2       1     dsk0    [0/0/0]
    .
    .
    .
   36:  6         canary     disk    none     0       1     dsk5    [1/3/0]
   37:  7         canary     disk    none     0       1     dsk6    [1/4/0]
   41:  8         canary     disk    none     0       1     dsk8    [1/5/0]

# hwmgr -redirect scsi -src 7 -dest 8
hwmgr: Redirect operation was successful

# hwmgr -show scsi
        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   30:  0         canary     disk    none     2       1     dsk0    [0/0/0]
    .
    .
    .
   36:  6         canary     disk    none     0       1     dsk5    [1/3/0]
   37:  7         canary     disk    none     0       1     dsk6    [1/5/0]

# hwmgr -scan scsi
hwmgr: Scan request successfully initiated

# hwmgr -show scsi
        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   30:  0         canary     disk    none     2       1     dsk0    [0/0/0]
    .
    .
    .
   36:  6         canary     disk    none     0       1     dsk5    [1/3/0]
   37:  7         canary     disk    none     0       1     dsk6    [1/5/0]
   42:  8         canary     disk    none     0       1     dsk9    [1/4/0]
#

1. Use hwmgr -show scsi to display SCSI devices.
2. Replace dsk6 (failing disk with DID = 7) with spare disk dsk8 (DID = 8). You should have a backup copy of dsk6.
3. Verify that the hardware characteristics of disk dsk6 have been given to the disk at bus/target/LUN 1/5/0. You can now restore to dsk6 from your backup copy.
4. When convenient, physically replace the failed disk at bus/target/LUN 1/4/0 with a new disk and use hwmgr -scan scsi to register it.


Creating a User-Defined SCSI Device Name
Some older disk devices do not provide a unique WWID. For such disks, the system will create a WWID using valid path bus/target/lun data. If the disk is shared in a cluster, each system creates its own unique WWID, leading to the possibility of concurrent access and data corruption. You can use hwmgr to create a unique user-defined name for the older disk which is then used by each system to create a common WWID and one set of device special file names for the disk.
Example 6-9 Creating a User-Defined SCSI Device Name

# hwmgr -show scsi -did 8 -full
        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   42:  8         canary     disk    none     0       1     dsk9    [1/4/0]

      WWID:0410003a:"DEC     RZ28     (C) DEC    PCB=ZG52462664 ; HDA=000041563084"

      BUS   TARGET  LUN   PATH STATE
      ------------------------------
      1     4       0     valid

# hwmgr -edit scsi -did 8 -uwwid "test disk"
hwmgr: Operation completed successfully

# hwmgr -show scsi -did 8 -full
        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   43:  8         canary     disk    none     0       1     dsk10   [1/4/0]

      WWID:0410003a:"DEC     RZ28     (C) DEC    PCB=ZG52462664 ; HDA=000041563084"
      WWID:ff100009:"test disk"

      BUS   TARGET  LUN   PATH STATE
      ------------------------------
      1     4       0     valid

1. Display full information for the SCSI disk to be given a user-defined name.
2. Use hwmgr -edit scsi to give the disk the user-defined name "test disk".
3. Display full information for the disk again. Repeat step 2 for each system, specifying the DID of the disk as seen by each system.


Using dsfmgr
Overview
The dsfmgr utility enables you to manage device special files. At boot time, the system script file /sbin/dn_setup runs and uses dsfmgr command options to poll the system for new devices and, if any are found, creates device special files for those devices. When the system is up, you can use dsfmgr to:

- Recreate or reassign device names (if a device fails or is replaced)
- Create device special files for legacy (old) devices
- Display, verify, and fix device special file data
- Add a new class of devices

You can also run /sbin/dn_setup to verify the consistency of device special files and the directory hierarchy. The dn_setup script runs automatically at system startup to create device special file names. Normally, you will not need to use dn_setup options; however, they can be useful if you need to troubleshoot device name problems or restore a damaged device special file directory or database files. The most commonly used dn_setup option is as follows:
# /sbin/dn_setup -sanity_check

If the check is successful, the message Passed is displayed. Table 6-6 shows some dsfmgr commands.
Table 6-6 Some dsfmgr Commands

Command                                   Description
dsfmgr -n node_name...                    Creates device special files for node
dsfmgr -K                                 Creates all device special files
dsfmgr -o node_name...                    Creates legacy device special files for node
dsfmgr -O                                 Creates all legacy device special files
dsfmgr -s                                 Displays device special file data
dsfmgr -v                                 Verifies device special file data
dsfmgr -v -F                              Verifies and fixes device special file data
dsfmgr -e bname_1 bname_2 or
  dsfmgr -m bname_1 bname_2               Reassigns device special files between devices
dsfmgr -a class parameters [options]      Creates a new class of device
dsfmgr -a category parameters [options]   Creates categories for a class of device

For a complete description of dsfmgr commands and options, see dsfmgr(8).

Old and New Device Names
Some points to note for old and new device names are as follows:

- During the initial installation, new device names are created for every disk and tape device found on the system. In the case of a complete (full) installation, only the new device special files are created. If the system was updated from a previous release using the update installation procedure, both the new device special files and the old device files will exist.
- Any new devices added to the system are automatically detected on reboot after the hardware installation. On the first reboot after installation of the new device, dsfmgr is called automatically during the boot sequence to create the new device special files for that device.
- To support applications that will only work with old device names, you may need to manually create the old device special files, either for every existing device or for new devices. Use the -O option to create all device special files in the previous (old) format, such as rz* or tz*.
# dsfmgr -O

Displaying Device Special File Data
To display device special file data, including device classes and categories, use the following command:
Example 6-10 Displaying Device Special File data
# dsfmgr -s
dsfmgr: show all datum for system at /

Device Class Directory Default Database:
   #  scope  mode  name
  --  -----  ----  -----------
   1    l    0755  .
   2    l    0755  none
   3    c    0755  cport
   4    c    0755  disk
   5    c    0755  rdisk
   .    .      .   .

Category to Class-Directory, Prefix Database:
   #  category  sub_category  type   directory  iw  t  mode  prefix
  --  --------  ------------  -----  ---------  --  -  ----  ------
   1  disk      cdrom         block  disk        1  b  0600  cdrom
   2  disk      cdrom         char   rdisk       1  c  0600  cdrom
   3  disk      floppy        block  disk        1  b  0600  floppy
   .  .         .             .      .           .  .  .     .
   7  disk      generic       block  disk        1  b  0600  dsk
   8  disk      generic       char   rdisk       1  c  0600  dsk
   .  .         .             .      .           .  .  .     .

Device Directory Tree:
     9  8  drwxr-xr-x  2  root  system  8192  Oct 17 14:56  /dev/.
  1849  8  drwxr-xr-x  2  root  system  8192  Sep  1 11:22  /dev/none
  1688  8  drwxr-xr-x  2  root  system  8192  Sep  1 11:20  /dev/cport
  1689  8  drwxr-xr-x  2  root  system  8192  Oct 17 14:58  /dev/disk
  1756  8  drwxr-xr-x  2  root  system  8192  Oct 16 16:22  /dev/rdisk
   .
   .

Dev Nodes:
  1870  0  crw-------  1  root  system  81,1048575  Sep  6 14:45  /dev/scp_scsi
  1871  0  crw-------  1  root  system  83,      0  Sep  6 14:45  /dev/kevm
  1872  0  crw-------  1  root  system  83,      2  Sep  6 14:45  /dev/kevm.pterm
  1692  0  brw-------  1  root  system  19,      1  Sep  6 14:42  /dev/disk/dsk0a
  1759  0  crw-------  1  root  system  19,      2  Sep  6 14:42  /dev/rdisk/dsk0a
  1693  0  brw-------  1  root  system  19,      3  Sep  6 14:42  /dev/disk/dsk0b
  1760  0  crw-------  1  root  system  19,      4  Sep  6 14:42  /dev/rdisk/dsk0b
  1694  0  brw-------  1  root  system  19,      5  Sep  6 14:42  /dev/disk/dsk0c
   .
   .
   .


Use dsfmgr -a class to add a class of device to the Class Directory database. Use dsfmgr -a category to add categories for the class. Each device name potentially has a prefix (for example, dsk), an instance number (for example, 2), and a suffix (for example, b). This gives a device name dsk2b. Note that the prefix plus instance number is referred to as the base name (dsk2). To add a directory to the device directory tree, you use dsfmgr -c, and to create device special files, you can use dsfmgr -n. In the following example, a new class of device called sound is added to the device special files database. The device prefix is snd. The directory /dev/sound is also created.
Example 6-11 Adding a New Class of Device

# dsfmgr -a class sound c 755
# dsfmgr -a category sound generic block sound 1 b 0600 snd
# dsfmgr -c sound

Verifying and Fixing Device Special File Data
Suppose you have a disk dsk6 on your system. Device special files dsk6a, dsk6b, and so on will exist in /dev/disk (actually /devices/disk). Suppose further that the device special file dsk6a is accidentally deleted. You can verify the consistency of the device special file data and recreate the device special file dsk6a with a single dsfmgr command, as shown in the following example.
Example 6-12 Verifying and Fixing the Device Special File Database

# ls /dev/disk/dsk6a
/dev/disk/dsk6a
# rm /dev/disk/dsk6a
# ls /dev/disk/dsk6a
ls: /dev/disk/dsk6a not found
# dsfmgr -v -F
dsfmgr: verify with fix all datum for system at /

Default File Tree: OK.
Device Class Directory Default Database: OK.
Device Category to Class Directory Database: OK.
Dev directory structure: OK.
Device Status Files: OK.
Dev Nodes:
    WARNING node does not exist: /dev/disk/dsk6a
OK.

Total warnings: 1
# ls /dev/disk/dsk6a
/dev/disk/dsk6a

The dsfmgr command with the -v option alone would detect the inconsistency of the missing file, and flag it as an error. Including the -F option on the command line fixes the inconsistency by recreating the file dsk6a and issuing a warning. If you give the dsfmgr -v command at this point, no errors or warnings will be reported.

Reassigning or Reusing Device Special File Names
After you perform certain activities with hwmgr, for example, replacing a failed disk, you may no longer have a device with the former device special file name. The system assigns new names in order. To reassign or reuse a device special file name, you can give the dsfmgr -m command, as shown in the following example.
Example 6-13 Reassigning a Device Special File Name

# hwmgr -show scsi
        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   30:  0         canary     disk    none     2       1     dsk0    [0/0/0]
   31:  1         canary     disk    none     0       1     dsk1    [0/2/0]
   32:  2         canary     disk    none     0       1     dsk2    [0/3/0]
   33:  3         canary     cdrom   none     0       1     cdrom0  [0/6/0]
   34:  4         canary     disk    none     0       1     dsk3    [1/1/0]
   35:  5         canary     disk    none     0       1     dsk4    [1/2/0]
   36:  6         canary     disk    none     0       1     dsk5    [1/3/0]
   37:  7         canary     disk    none     0       1     dsk6    [1/5/0]
   43:  8         canary     disk    none     0       1     dsk10   [1/4/0]

# dsfmgr -m dsk10 dsk7
dsk10a=>dsk7a dsk10b=>dsk7b dsk10c=>dsk7c dsk10d=>dsk7d
dsk10e=>dsk7e dsk10f=>dsk7f dsk10g=>dsk7g dsk10h=>dsk7h
dsk10a=>dsk7a dsk10b=>dsk7b dsk10c=>dsk7c dsk10d=>dsk7d
dsk10e=>dsk7e dsk10f=>dsk7f dsk10g=>dsk7g dsk10h=>dsk7h

# hwmgr -show scsi
        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   30:  0         canary     disk    none     2       1     dsk0    [0/0/0]
   31:  1         canary     disk    none     0       1     dsk1    [0/2/0]
   32:  2         canary     disk    none     0       1     dsk2    [0/3/0]
   33:  3         canary     cdrom   none     0       1     cdrom0  [0/6/0]
   34:  4         canary     disk    none     0       1     dsk3    [1/1/0]
   35:  5         canary     disk    none     0       1     dsk4    [1/2/0]
   36:  6         canary     disk    none     0       1     dsk5    [1/3/0]
   37:  7         canary     disk    none     0       1     dsk6    [1/5/0]
   43:  8         canary     disk    none     0       1     dsk7    [1/4/0]
#

In this example, the disk device name dsk10 is reassigned to be dsk7; the dsk10 device special files are removed and replaced by the corresponding dsk7 files in the process. The rename list appears twice because both the block special files (in /dev/disk) and the character special files (in /dev/rdisk) are reassigned.


Using the scu Utility
The scu utility provides commands necessary for normal maintenance and diagnostics of SCSI peripheral devices and the CAM I/O subsystem. You can use scu to:

- Format disks
- Reassign a defective disk block
- Reserve and release a device
- Display and set device and program parameters
- Enable and disable a device

The following example shows some scu utility commands.
# scu
scu> set nexus bus 0 target 0 lun 0
Device: RZ28, Bus: 0, Target: 0, Lun: 0, Type: Direct Access
scu> show capacity
Disk Capacity Information:
        Maximum Capacity: 4110480 (2007.070 megabytes)
            Block Length: 512
scu> show scsi status 0
SCSI Status = 0 = SCSI_STAT_GOOD = Command successfully completed
scu> show edt
CAM Equipment Device Table (EDT) Information:

Bus/Target/Lun  Device Type  ANSI    Vendor ID  Product ID        Revision  N/W
--------------  -----------  ------  ---------  ----------------  --------  ---
  0   0   0     Direct       SCSI-2  DEC        RZ28     (C) DEC  442D      N
  0   1   0     Direct       SCSI-2  DEC        RZ28     (C) DEC  442D      N
  0   2   0     Direct       SCSI-2  DEC        RZ26     (C) DEC  T386      N
  0   5   0     Sequential   SCSI-2  DEC        TLZ07    (C)DEC   4BE0      N
  0   6   0     CD-ROM       SCSI-2  DEC        RRD43    (C) DEC  1084      N
scu> q
#

For more information, see /sbin/scu.hlp.


Using the devswmgr Utility
The devswmgr utility enables you to manage the device switch table by displaying information about the device drivers in the table. You can also use this command to release device-switch table entries. Typically, you release the entries for a driver after you have unloaded the driver and have decided not to reload it later. Releasing the entries frees them for use by other device drivers. The following example shows how the devswmgr utility is used.
# devswmgr -display
Device switch information
        device switch database read from primary file
        device switch table has 200 entries
# devswmgr -getnum
Device switch reservation list
        (*=entry in use)
        driver name     instance  major
        --------------  --------  -----
        pfm                    1     71*
        fdi                    2     58*
        xcr                    2     57
        kevm                   1     56*
        cam_disk               2     55*
        emx                    1     54
        TMSCP                  2     53
        MSCP                   2     52
        xcr                    1     44
        LSM                    4     43
        .
        .
        .


Configuring Disk Devices Manually
Overview
Although most disk device management is automatic, there are times when you need to add devices that cannot be detected and added automatically. Dynamic Device Recognition (DDR) is a framework that describes the operating parameters and characteristics of SCSI devices to the SCSI CAM I/O subsystem. You can use DDR to include new and changed SCSI devices in your environment without having to reboot the operating system. You do not disrupt user services and processes, as happens with static methods of device recognition. Beginning with Tru64 UNIX V4.0, DDR is the preferred method for recognizing SCSI devices. With this method, DDR modifies the SCSI CAM system to extract as much information as possible from a device rather than depending on statically compiled tables. This provides the following benefits:

- Devices are automatically recognized.
- Static configuration is not required.
- The system does not require a reboot.
- System availability is enhanced.

Devices you add to the system should conform to the Small Computer System Interface (SCSI-2) standard, as specified in SCSI-2 (X3.131-1994). If your devices do not comply with the standard, or if they require exceptions from the standard, store information about these differences in the DDR database. If the devices comply with the standard, there is no need to modify the database.

Adding a SCSI Disk
When you add a SCSI disk or tape device to the system, the new device will be located automatically, added to the hardware management databases, and its device special files will be created. On the first reboot after installation of the new device, dsfmgr is called automatically during the boot sequence to create the new device special files for that device. Alternatively, you can use the following sequence (a transcript sketch follows this list):

1. Plug in the new device.
2. Issue the hwmgr -scan scsi command.
3. Issue the dsfmgr -K command (see Table 6-6).
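A minimal transcript sketch of that sequence; the hwmgr -show scsi verification at the end is an addition here, not part of the original steps:

# hwmgr -scan scsi
hwmgr: Scan request successfully initiated
# dsfmgr -K
# hwmgr -show scsi      (confirm that the new disk appears with a dskN device name)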


Changing the DDR Database
When you make a change to the operating parameters or characteristics of a SCSI device, you must describe the changes in the /etc/ddr.dbase file. Two common reasons for changes are:

- Your device deviates from the SCSI standard or reports something different from the SCSI standard.
- You want to optimize device defaults, most commonly the TagQueueDepth parameter, which specifies the maximum number of active tagged requests the device supports.

Use the ddr_config -c command to compile the /etc/ddr.dbase file and produce a binary database file, /etc/ddr.db. When the kernel is notified that the file's state has changed, it loads the new /etc/ddr.db file. In this way, the SCSI CAM I/O subsystem is dynamically updated with the changes you made in the /etc/ddr.dbase file. The contents of the on-disk database are synchronized with the contents of the in-memory database. Use the following procedure to compile the /etc/ddr.dbase database:

1. Log in as root or become the superuser.
2. Enter the ddr_config -c command, as shown below.
# /sbin/ddr_config -c
#

Note
Refer to Tru64 UNIX System Administration, Chapter 5, for procedural task details to modify the binary database.
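For illustration only, an /etc/ddr.dbase entry that tunes TagQueueDepth might look roughly like this; the stanza layout and attribute names are an assumption based on the general entry format described in ddr.dbase(4), so compare against the entries already in your /etc/ddr.dbase before editing:

SCSIDEVICE
    #
    # Hypothetical entry: raise the tagged-queue depth for an RZ28 disk.
    #
    Type = disk
    Name = "DEC"  "RZ28"
    PARAMETERS:
        TypeSubClass  = hard_disk
        TagQueueDepth = 32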


Learning Check
1. For the placement of data, disk surfaces are divided into ___________, ___________, and ____________ by ______________.
2. The smallest contiguous region on a disk that can be accessed with a single I/O operation is a ____________.
3. The a and b disk partitions are overlapped by the _____ partition.
4. A list of supported disks and default partition sizes is found in the _________________ file.
5. ___ You can change partition sizes using what command?
   a. disklabel
   b. diskconfig
   c. Either of these commands
   d. Neither of these commands
6. You add a new swap partition by changing the value of the __________ attribute in the _____ subsystem.
7. To edit the /etc/sysconfigtab file, you use the __________ command.
8. _________ allow access to a file or directory by a single name, regardless of whether the name represents a clusterwide file or directory or a member-specific file or directory.
9. The ____ utility enables you to manage device special files.


Managing AdvFS File Systems
Module 7 Introduction
This module introduces you to UNIX file systems and to the AdvFS file system. It describes directories, file types, and the directory hierarchy, and gives an overview of the organization of system files. It demonstrates how to mount and unmount file systems. This module also explains how to set up and manage AdvFS, and describes the Advanced File System Utilities product. The older UFS file system is covered in Appendix A. Lab exercises let you practice using the mount and umount commands, and the commands used to create and manage AdvFS file systems.

Objectives
To manage AdvFS file systems, you should be able to:

- Describe the organization of system files
- Mount and unmount a file system
- Describe the AdvFS storage model, and explain the AdvFS concepts of file domains and filesets
- Create an AdvFS file domain and fileset, and mount the fileset
- Manage AdvFS file systems, including quotas and backups
- Use AdvFS Utilities to enhance the file system
- Use the Advanced File System Manager graphic user interface to manage file domains and filesets
- Use the Advanced File System with the Logical Storage Manager

Resources
For more information on the topics in this module, see the following:

- UNIX System Administration Handbook, Nemeth, Snyder, & Seebass, Chapters 4 and 12
- Tru64 UNIX System Administration, Chapter 6
- Tru64 UNIX AdvFS Administration


Introduction to File Systems
Although Tru64 UNIX V5.1 supports several different file systems, the Advanced File System (AdvFS) is now the default file system used at installation time, and also the file system that must be used for TruCluster Server. For these reasons, AdvFS is now the principal file system that a system administrator must manage. Other file systems, most notably the older UNIX File System (UFS), are discussed in Appendix A. You can still create and use UFS for your applications if you do not need the advanced features of AdvFS.

Directories and File Types
There are several different file types that can exist in a file system. A directory is one of these types. Other types include regular files, device special files, and symbolic links, as well as a special type of symbolic link called a context-dependent symbolic link or CDSL. Some of these have already been introduced in earlier modules of this course.

Directory Hierarchy
Any file system, whether local or remotely mounted, is part of the total directory hierarchy of a system or cluster. Viewing the directory hierarchy as a tree growing from the root file system (/), a file system, whether AdvFS or UFS, can be added as a new branch to the existing tree of directories by mounting it on an existing directory or mount point.

Mounting and Unmounting File Systems
After creating a file system and selecting a mount point, you mount the file system on the directory hierarchy using the mount command. To unmount a file system, you use the umount command. To ensure that certain file systems are automatically mounted when the system comes up, you can place entries in a file system table called /etc/fstab.

Creating File Systems
The method you use to create a file system depends on the file system type. For AdvFS or UFS file systems, you must first select a logical volume to use.

Logical Volumes

You were introduced to logical volumes in the Managing Disks module. Starting with the simplest form, a logical volume can be:

- A disk partition on a single physical disk, such as /dev/disk/dsk3b.
- An entire physical disk, such as /dev/disk/dsk3c (the c partition, or entire disk).
- A group of physical disks attached to an HSZ or HSG controller. The entire group of disks is seen by the operating system as if it were a single disk with the customary partitions and a device special file name. This is the hardware implementation of RAID technology.
- An LSM volume created from one or more disks or disk partitions. This is a software implementation of RAID technology.

An LSM volume is referenced by a form of device special file, such as /dev/vol/rootdg/vol-01 (block device) or /dev/rvol/rootdg/vol-01 (raw or character device). rootdg is the name of a disk group and vol-01 is the name of the LSM volume. For information on how to create an LSM volume, see Appendix B. You can use any of these different forms of logical volumes when creating an AdvFS or UFS file system.

AdvFS

For AdvFS, you first create a file domain using the mkfdmn command, specifying a single logical volume. You can then (or later) add additional logical volumes to the AdvFS file domain to extend its capacity. Next you create a fileset within the file domain using the mkfset command. The AdvFS fileset is the file system that you mount on a mount point within the directory hierarchy. The AdvFS file system derives its name from the domain name and the fileset name. For example, if you create a domain called domain1, and a fileset within that domain called fileset1, the name of the file system you will specify in the mount command is domain1#fileset1. Creating and managing an AdvFS file system is discussed more fully in later sections of this module.

UFS

For UFS, you use the newfs command with the raw device name, such as /dev/rdisk/dsk3b, to create a file system, as discussed in Appendix A.
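As a quick sketch of the naming convention just described, using commands that are covered in detail later in this module (the disk partition is only an example):

# mkfdmn /dev/disk/dsk3c domain1     (create a file domain on a logical volume)
# mkfset domain1 fileset1            (create a fileset within the domain)
# mount domain1#fileset1 /mnt        (mount the fileset by its domain#fileset name)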


Directories and File Types
A file in the UNIX system is a sequence, or stream of bytes. The Tru64 UNIX operating system supports the file types listed in Table 7-1.
Table 7-1 Types of Files

File Type              Symbol  Function
Regular file           -       Contains executable program, shell script, ASCII text, or source code
Directory              d       Contains the names of files or other directories
Character device file  c       References a device driver that does its own input and output buffering, such as a terminal
Block device file      b       References a device driver that performs I/O in large chunks (1024 or more bytes) and the kernel provides buffering support, such as a disk
UNIX domain socket     s       Provides a communication connection between processes
Named pipe             p       Provides a communication connection between processes
Symbolic link          l       Contains the name of another file (or directory) to which it points
CDSL                   l       Context-dependent symbolic links allow access to a file or directory by a single name, in cases where that name must resolve to a member-specific file or directory

The symbol column shows the file type symbol that appears in a long directory listing. UNIX systems provide access to devices through special device files. For example, you can use the character or block disk device file name in several commands. For more information on CDSLs, see Tru64 UNIX System Administration, Chapter 6.
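To make the CDSL concept concrete, here is an illustrative listing; the file chosen is hypothetical, but the {memb} token shown in the link target is the documented CDSL mechanism:

# ls -l /etc/rc.config
lrwxrwxrwx   1 root  system  41 Sep  1 11:20 /etc/rc.config -> ../cluster/members/{memb}/etc/rc.config

At run time the literal {memb} component resolves to the appropriate member directory (member0 on a standalone system), so the same pathname works on every cluster member.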


Directory Hierarchy
System files are organized by function and use. Figure 7-1 highlights the main system directories.
Figure 7-1 Tru64 UNIX System Directory Hierarchy

[Figure: a tree rooted at / with the main system directories - cluster (members), dev, devices, etc (nls), opt, sbin (init.d, rc0.d, rc2.d, rc3.d), sys, tmp, usr (bin, ccs, cluster, sbin, share/man), var (adm, cluster, spool, run), and vmunix.]

Note

The structure below /cluster/members varies according to whether a system is standalone or exists within a cluster. On a standalone system, /cluster/members/member always evaluates to member0, whereas on a system within a cluster /cluster/members/member evaluates to memberx, where x is the member id in the generic subsystem of sysconfigtab.

A goal in the UNIX system directory hierarchy is efficient organization. Files are separated by function and intended use. Commonly used command files should be in the normal search path set in users' .profile or .login files. The /var area can be a separate file system or a directory under /usr.


Organizing System Files
Table 7-2 highlights the main system directories.
Table 7-2 UNIX System Directory Hierarchy

Directory        Description
/                The root directory for the root file system of the operating system
/cluster/        Directory for a cluster of which this system could be a member
  members/       Root directory for cluster member0, this system, whether or not it is in a cluster
/dev/            Block and character device special files
/devices/        Directory for device special files (new method)
/etc/            System configuration files and databases; nonexecutable files
  nls/           National language support databases
/opt/            Optional for layered products, such as applications and device drivers
/sbin/           Commands to boot and initialize the system in single-user mode
  init.d/        Application specific scripts for startup and shutdown
  rcn.d/         The rc files executed for system state n
/subsys/         Dynamically configured kernel modules required in single-user mode
/sys/            Links to those files in /usr/sys/ that are source-code based
/tmp/            System-generated temporary files, usually not preserved across a system reboot
/usr/            Most user utilities and applications
  bin/           Common utilities and applications
  ccs/           C compilation system; tools and libraries used to generate C programs
  cluster/       Directories for cluster members
  include/       Program header files
  sbin/          System administration utilities and other system utilities
  share/         Architecture-independent ASCII files
    man/         Online reference pages
  shlib/         Binary loadable shared libraries
  sys/           System configuration files
/var/            Multipurpose log, temporary, varying, and spool files [An alternate location for this directory is /usr/var.]
  adm/           Common administration files and databases
    crash/       Used for saving kernel crash dumps
    cron/        Files used by cron daemon
    sendmail/    Configuration and database files for sendmail
    syslog/      Files generated by syslog
  cluster/       Directory for cluster members
  run/           Files created when daemons are running
  spool/         Miscellaneous printer and mail system spooling directories


Mounting and Unmounting File Systems
Overview
Visualize the root directory and its files as a tree. By grafting a branch (or file system) to the tree, you can create a larger tree. However, you need a point at which to attach the file system. In UNIX, this grafting point is a directory. When created, the directory /usr is an ordinary directory, but when a file system is grafted to this directory, /usr becomes an entry point to the new file system. This act of grafting is called mounting. The result is that a user can move easily from file system to file system as if moving from directory to directory. To make the files in a file system available to users, mount the file system into the directory hierarchy. Figure 7-2 and Figure 7-3 illustrate this concept.
Figure 7-2 File Systems Before Mounting

Figure 7-3 File Systems After Mounting

[Figures: two disks; after mounting, the file system on Disk 2 is attached to the directory tree at /usr.]

Disk 1 is the system disk containing the root file system. Disk 2 is another disk containing a file system. Before it is mounted, the files on it are not accessible. After it is mounted, the files are accessible through the directory mount point, in this case /usr.

This figure illustrates another important point. Usually you do not mount a file system on a directory that contains files. However, you can do so without deleting or damaging the files. The files are invisible and inaccessible until the file system is unmounted. The mount command is available in the standalone environment. The root file system is mounted automatically when the system is booted, and cannot be unmounted.

Using the mount Command
To mount a file system on an existing directory, use the mount command.
mount [ options ] [ device ] [ mountpoint ]

Some options include:

-a          Mounts all file systems listed in /etc/fstab
-t          Specifies a type of file system to mount; for example: mount -a -t ufs
-r          Specifies mount with read-only access
-o          Specifies file system specific options
device      The special device name
mountpoint  The existing directory to mount the file system under

To display the currently mounted file systems, use the mount command without arguments, as shown in Example 7-1.
Example 7-1 Using the mount Command

# mount -t advfs domain1#fileset1 /test
# mount
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
usr_domain#var on /var type advfs (rw)
/usr/users@nfsusers on /usr/users type nfs (v3, rw, udp, hard, intr)
/usr/public@nfspublic on /usr/public type nfs (v3, rw, udp, hard, intr)
/dev/disk/dsk1c on /usr/local type ufs (ro,nosuid)
domain1#fileset1 on /test type advfs (rw)

For file-system-specific options, see mount(8). If you are mounting a remote file system (using NFS), use one of the following formats in place of the device:
host:remote_directory
remote_directory@host

If the device and mount point are listed in /etc/fstab, you can shorten the command to mount mountpoint. The system will find the device in /etc/fstab.
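For example, with an fstab entry like the /tmp line shown in Example 7-2 below, the shortened form is simply:

# mount /tmp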


Using the fstab File
Some file systems, such as /usr and /usr/users, are used frequently. To avoid mounting file systems each time the system starts up, you can maintain a file system table called /etc/fstab. The fstab file contains the names of file systems that are frequently used. The fstab file is read and the file systems are mounted when the system goes into multiuser mode. If you boot to single-user mode, the root file system is mounted read-only. The fstab file provides the system with information about the mounting procedure:

- The file systems to mount -- for UFS file systems using a disk partition, this is the block special file name
- The directories to mount them on
- The mounting order -- they are mounted in the order listed

The root directory (/) should be listed first, and any file system should be listed above a file system to be mounted under it. Example 7-2 shows a sample /etc/fstab file.
Example 7-2 An /etc/fstab File

# cat /etc/fstab
root_domain#root        /            advfs   rw     0 1
/proc                   /proc        procfs  rw     0 0
usr_domain#usr          /usr         advfs   rw     0 2
usr_domain#var          /var         advfs   rw     0 2
/usr/users@nfsusers     /usr/users   nfs     rw,bg  0 0
/usr/public@nfspublic   /usr/public  nfs     rw,bg  0 0
/dev/disk/dsk5c         /tmp         ufs     rw     0 0

1. Special device name, domain#fileset, or remote file system to be mounted
2. Directory where the file system is mounted
3. Type of file system: ufs, nfs, procfs, advfs, swap, or cdfs
4. Mount options for the type of file system (a comma-separated list). Common options are:

   ro  Read-only access
   rw  Read-write access
   xx  Ignore this file system entry
   bg  Retries in the background if the first mount attempt fails

5. Backup frequency for dump: 0 indicates the file system is not backed up
6. fsck (for UFS only) and quotacheck consistency check pass order: root is 1; 0 indicates the file system is not checked

You must be superuser to edit the /etc/fstab file. For file system specific options, see mount(8), fstab(8).


Using the umount Command
Use the umount command to unmount a file system. The format for umount is:
umount [ options ] [ mountpoint ]

Some options include:

-A          Attempts to unmount all file systems currently mounted
-a          Unmounts all file systems listed in /etc/fstab
-t          Specifies a type of file system to unmount; for example: umount -a -t ufs
-h          Specifies a host for unmounting all file systems remotely mounted in /etc/fstab
mountpoint  Existing directory where the file system is mounted

You must unmount a file system if you want to check it with fsck (UFS only) or change its partition size with disklabel. You cannot unmount a file system if one of the files or directories is in use. To determine what processes are using files or file systems, use the fuser command. You cannot unmount the root file system. Example 7-3 shows how to use the umount command.
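A quick sketch of checking for activity before unmounting (the mount point is an example; see fuser(8) for options that report on an entire file system):

# fuser /test      (list IDs of processes currently using /test)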
Example 7-3 Using the umount Command

# mount
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
usr_domain#var on /var type advfs (rw)
/usr/users@nfsusers on /usr/users type nfs (v3, rw, udp, hard, intr)
/usr/public@nfspublic on /usr/public type nfs (v3, rw, udp, hard, intr)
domain1#fileset1 on /test type advfs (rw)
#
# umount /usr
/usr: Device busy
# umount /test
# mount
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
usr_domain#var on /var type advfs (rw)
/usr/users@nfsusers on /usr/users type nfs (v3, rw, udp, hard, intr)
/usr/public@nfspublic on /usr/public type nfs (v3, rw, udp, hard, intr)
# umount -h nfspublic

The following example shows the mount and umount commands.
Example 7-4 Mounting and Unmounting File Systems

# who
root     tty00    Apr 29 13:57
# ls /mnt
# pwd
/
# mount
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
usr_domain#var on /var type advfs (rw)
# mount domain1#fileset1 /mnt
# mount
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
usr_domain#var on /var type advfs (rw)
domain1#fileset1 on /mnt type advfs (rw)
# ls /mnt
.#clipboard.clp      .@today's_test.doc    System.cab
[email protected]   .DesktopState         clipboard.clp
[email protected]   .deskprint            today's_test.doc
# umount domain1#fileset1
# mount
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
usr_domain#var on /var type advfs (rw)
# ls /mnt
#

1. Superuser ensures that no one is logged in and using the directory /mnt.
2. Directory /mnt is empty.
3. Superuser ensures that the working directory is not the directory /mnt.
4. Superuser wants to mount domain1#fileset1 and displays the list of currently mounted file systems. The file system domain1#fileset1 is not mounted.
5. Superuser mounts the file system domain1#fileset1 under the directory /mnt.
6. The mount table shows that domain1#fileset1 is now mounted under /mnt.
7. The directory /mnt was empty before the mount. Now it is an access point to the files contained in the file system domain1#fileset1.
8. Superuser unmounts domain1#fileset1.
9. The mount table shows that domain1#fileset1 is no longer mounted.
10. The directory /mnt is empty and is no longer an access point to a mounted file system.


Introducing AdvFS Concepts
Overview
The Advanced File System (AdvFS) is a local file system that uses journaling to recover from unplanned system restarts significantly faster than UFS. AdvFS provides a flexible structure, decoupling file systems from physical disks, thus allowing a growth path. The separately licensed AdvFS Utilities product provides multidisk capabilities and management utilities. AdvFS is transparent to users and programmers. It maintains compatibility with the standard file system interface. AdvFS provides a number of utility programs for system administrators to manage the system. As your system requirements change, AdvFS allows you to easily adjust your storage up or down to meet your requirements.

The minimum configuration needed for an active AdvFS file system is one file domain and one mounted fileset. The /etc/fdmns directory defines the file domains on your system by providing a subdirectory for each file domain you create. Follow these steps to create an active file domain:

1. Create a file domain
2. Create a fileset
3. Mount a fileset

Use these commands to display file domain information:

showfdmn
showfsets

Features and Benefits
AdvFS provides a number of features over traditional file systems, as shown in the following table.
Table 7-3 AdvFS Features and Benefits

Feature                 Benefit
Rapid crash recovery    Write-ahead logging eliminates the need to use the fsck utility to check and repair file systems and reduces the time needed to mount file systems.
Extended capacity       The design extends the size of both files and file systems, supporting large-scale storage systems.
High performance        An extent-based file allocation scheme consolidates data transfer.
Online defragmentation  System performance improves by making files more contiguous while the system remains in use.
Volume spanning*        A file or file system can span multiple volumes within a shared storage pool.
Online resizing*        The size of the file system can be dynamically changed by adding or removing volumes while the system is in use.
Online backup*          File system contents can be backed up to media without interrupting the work flow of system users by using fileset clones.
File-level striping*    Distributing file data across multiple volumes improves file transfer rates.
File undelete*          Users can recover deleted files without assistance from system administrators.

* This feature requires the optional AdvFS Utilities license.

AdvFS Components
The Advanced File System consists of two components:

- Advanced File System (AdvFS) is a file system option on the Tru64 UNIX operating system. If you registered the Tru64 UNIX operating system license PAK, you can install and use the file system. The basic file system provides:
  - Recoverability
  - Fast restart
  - One partition or volume per domain
  - Multiple filesets per domain
  - Fileset quotas
  - vdump, vrestore, and other commands to manage the file system

- Advanced File System Utilities is a separately licensed, optional layered product. It enhances the file system capabilities and system management. The Utilities product provides:
  - Multivolume capabilities
  - Online fileset resizing
  - Fileset migration
  - Online backup (cloning)
  - File undelete
  - Domain balancer
  - File striping

Functions can be accomplished using either AdvFS commands or the graphic user interface management tool.


AdvFS Storage Model
Unlike UFS, AdvFS separates the directory hierarchy layer from the physical storage layer. The directory hierarchy layer handles file naming and the file system interface - opening and reading files. The physical storage layer handles write-ahead logging, file allocation, and physical disk I/O functions. The following figure shows the separation of directory and storage layers in AdvFS.
Figure 7-4 AdvFS Storage Model

[Figure: the directory hierarchy layer shown separately from the storage layer.]

This separation allows you to manage the physical storage of files separately from the directory hierarchy. For example, you can move a file from one disk to another within a storage domain without changing its pathname. The following figure shows a file's physical location being moved without changing its pathname.


Figure 7-5 File Migration in AdvFS

[Figure: a file's storage moves between volumes in the storage layer while its pathname in the directory hierarchy layer is unchanged.]

Two new terms used to describe the AdvFS design are file domain and fileset.

File Domains
A file domain is a named set of one or more volumes that provides a shared pool of physical storage. A volume is any mechanism that behaves like a UNIX block device, for example:

- An entire disk
- A disk partition
- A RAIDset
- A logical volume configured with the Logical Storage Manager (LSM)

The following figure shows a file domain with two volumes.
Figure 7-6 File Domain

[Figure: a file domain containing two volumes; a single file is stored as Segment 1 on one volume and Segment 2 on the other.]


The first step in setting up an Advanced File System is creating a file domain as illustrated in Figure 7-6. When created, a file domain consists of a single volume. If you have the Advanced File System Utilities product installed and licensed, you can add more volumes.

Guidelines for File Domains
Follow these guidelines for creating file domains:

- Avoid I/O scheduling contention and enhance system performance by dedicating the entire disk (partition c) to a file domain.
- You can have 100 active file domains per system. A file domain is active when at least one of its filesets is mounted.
- Although the risk of media failure is slight, a single failure within a file domain renders the entire domain useless. In the case of media failure, you must recreate the file domain and restore all the files. To reduce the risk of domain failure, limit the number of volumes per file domain to three (250 are allowed). You should also store a copy of your configuration information - domain and fileset names and associated volumes.
- To maintain high performance, avoid splitting a disk between two file domains.
- Version 5.1 file domains support quota values larger than 2 terabytes. To upgrade a file domain, create a new domain on Version 5.1 or later and copy information from the old file domain to it.

Filesets
A fileset represents a portion of the directory hierarchy. Each fileset is a uniquely named set of directories and files that form a subtree structure. The following figure shows three filesets.
Figure 7-7 Filesets

[Figure: three filesets, each a subtree of directories and files.]

A fileset is similar to a file system in many ways:

- You mount filesets like you mount file systems.
- Filesets are units on which you enable quotas.
- Filesets are units that you back up.

Filesets offer features not provided by file systems:

- You can clone a fileset and back it up while users are still accessing the original.
- A fileset can span several disks in a file domain.

Guidelines for Filesets
Follow these guidelines for creating filesets: Tru64 UNIX V5.1 supports an unlimited number of filesets per system. The more filesets that you establish, the greater your flexibility. On the other hand, a greater number of filesets increases your management overhead.

Installed Subsets for AdvFS
At installation time the following mandatory and optional subsets can be installed.
OSFADVFS510           AdvFS Commands (System Administration)
OSFADVFSBIN510        AdvFS Kernel Modules (Kernel Build Environment)
OSFADVFSBINOBJECT510  AdvFS Kernel Objects (Kernel Software Development)
OSFXADVFS510          AdvFS Graphical User Interface (System Administration)
OSFADVFSDAEMON510     AdvFS Daemon (System Administration)

See setld(8) and doconfig(8) for more information.


Setting Up AdvFS
Overview
This section describes how to set up an Advanced File System by:

- Creating a file domain
- Creating a fileset
- Mounting the fileset

Setting Up an Advanced File System
Setting up an Advanced File System requires four steps.
Step  Action
1     Create a file domain using the mkfdmn command.
2     Create a fileset using the mkfset command.
3     Create the mount point using the mkdir command.
4     Mount the fileset using the mount command.

You must have root user privilege to use the mkfdmn and mkfset commands. The Advanced File System provides root file system support. The operating system installation script optionally creates a root_domain, usr_domain and var_domain. You can also create AdvFS file systems for user data. As with any file system type, you must plan your system configuration.

Creating a File Domain
Use the mkfdmn command to create a single volume file domain. See mkfdmn(8).
mkfdmn [-F] [-l log_pages] [-o] [-r] [-V3|-V4] special domain
-F            Ignore overlapping partition or block warnings
-l log_pages  Specifies the number of log pages in the file domain
-o            Overwrites an existing file domain
-r            Specifies the file domain is the root domain (the root domain can have only one volume)
-V3|-V4       Create the file domain using that version of the on-disk formats (V4 = formats employed by AdvFS with Tru64 UNIX V5.1)
special       Specifies the block special device name
domain        Specifies the name of the file domain to be created

The two required parameters are the special device name and the domain name, which must be unique. The following example shows the syntax for creating an AdvFS domain using a disk partition.
# mkfdmn /dev/disk/dsk3c domain1

You can also specify an LSM volume with the mkfdmn command:
# mkfdmn /dev/vol/publicdg/vol-01 domain2

Creating a Fileset
Use the mkfset command to create a fileset in a file domain. See mkfset(8).
mkfset domain fileset

domain   Specifies the name of an existing file domain
fileset  Specifies the name of the fileset to be created

# mkfset domain1 users

You do not have to name the fileset the same as its mount point directory; however, it is a common convention. For example, if the mount point directory is /tmp, you can name the fileset tmp. You can create multiple filesets within a file domain. You can mount and unmount each fileset independently of the other filesets in the file domain.

Creating a Mount Point
If the mount point directory does not already exist, create it using the mkdir command. See mkdir(1).
mkdir directory

# mkdir /public

Mounting the Fileset
Use the mount command to mount a fileset. See mount(8).
mount [-t advfs] domain#fileset directory

-t advfs   Specifies Advanced File System (can be omitted)
domain     Specifies the file domain name
fileset    Specifies the fileset name
directory  Specifies the mount point directory

The number sign (#) character, between the file domain and fileset, is a required part of the syntax representing a fileset. It does not represent a comment. See the reference page for more options to the mount command.
# mount domain1#users /usr/users

Creating an /etc/fstab Entry
To automatically mount a fileset when the system boots, add an entry to the /etc/fstab file. See fstab(4).
domain1#users /usr/users advfs userquota,groupquota 0 0

You should specify the quota options, even if you are not enforcing disk quotas because enabling quotas allows you to generate usage statistics.
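Because quota maintenance is always on, you can inspect those statistics at any time; a quick sketch using the vdf command described later in this module (the fileset name matches the fstab entry above):

# vdf domain1#users      (display block usage and quota limits for the fileset)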


AdvFS Setup
In the following example, we will create two domains and three filesets, as shown in the figure.
Figure 7-8 AdvFS Setup

[Figure: a directory tree rooted at / with branches usr (bin, lib, sbin), tmp, and public (docs, projects); Domain 1 provides the storage for one fileset and Domain 2 for the other two.]

We will create the file domain domain1 on dsk3c, which will contain the fileset usr1, to be mounted on the directory /usr1. We will also create the file domain domain2 on dsk2c, which will contain the two filesets tmp and public. These will be mounted on /tmp and /public, respectively. The /public directory does not exist, and must be created.
Example 7-5 AdvFS Setup

# mkfdmn /dev/disk/dsk3c domain1
# mkfset domain1 usr1
# mkdir /usr1
# mount domain1#usr1 /usr1
# mkfdmn /dev/disk/dsk2c domain2
# mkfset domain2 tmp
# mkfset domain2 public
# mkdir /public
# mount domain2#tmp /tmp
# mount domain2#public /public
#


Managing AdvFS
Overview
The following table describes the basic AdvFS commands.
Table 7-4 Basic AdvFS Commands

Command              Function
advscan              Locates AdvFS volumes, rebuilds /etc/fdmns directory
chfile               Changes the attributes of a file
chfsets              Changes the attributes of a fileset
chvol                Changes the attributes of a volume
defragment           Makes the files in a domain more contiguous (improves performance)
edquota              Edits user and group quotas
mkfdmn               Creates a file domain
mkfset               Creates a fileset in an existing file domain
ncheck               Prints the tag and full pathname for all files in a specified fileset; uses the sorted output as input for the quot command
quot                 Summarizes fileset ownership
quota                Displays disk usage and quota limits by user or group
quotacheck           Checks fileset quota consistency
quotaon              Turns quota enforcement on
quotaoff             Turns quota enforcement off
renamefset           Renames an existing fileset
repquota             Summarizes fileset quotas
rmfset               Removes a fileset from a file domain
showfdmn             Displays the attributes of a file domain
showfile             Displays the attributes of a file
showfsets            Displays the attributes of the filesets in a file domain
tag2name             Determines the name and path of an AdvFS file that is identified by a tag number
vdf                  Displays information for AdvFS domains and filesets
vdump, rvdump        Performs incremental backups to local or remote storage devices
vrestore, rvrestore  Restores files from devices written with the vdump or rvdump commands
vfile                Outputs the contents of a reserved file from an unmounted domain
verify               Verifies that the AdvFS directory structure is correct, that all directory entries reference a valid file (tag), and that all files (tags) have a directory entry

Other commands include advfsstat, mountlist, nvfragpg, nvlogpg, and switchlog.


Show File
The showfile command displays the attributes of one or more AdvFS files. The command also displays the extent map of each file. An extent is a contiguous area of disk space that the file system allocates to a file. Simple files have one extent map; striped files have an extent map for every stripe segment. You can list AdvFS attributes for an individual file or the contents of a directory. Although the showfile command lists both AdvFS and non-AdvFS files, the command displays meaningful information for AdvFS files only.
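A brief sketch of checking one file's attributes; the pathname here is arbitrary, and the -x option (per showfile(8)) adds the extent map to the output:

# showfile /usr/users/smith/report.txt      (display AdvFS attributes of the file)
# showfile -x /usr/users/smith/report.txt   (also display the file's extent map)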

Show Domain
Use the showfdmn command to show the attributes of a file domain and its volumes. See showfdmn(8).
showfdmn [-k] domain

# showfdmn usrdmn
               Id              Date Created  LogPgs  Version  Domain Name
2b5361ba.000791be  Tue Jan 12 16:26:34 1999     256        4  usrdmn

 Vol  512-Blks    Free  % Used  Cmode  Rblks  Wblks  Vol Name
  1L    820164  351580     57%     on    256    256  /dev/disk/dsk0d

The -k option displays the number of blocks in terms of 1K blocks instead of the default 512-byte blocks. A file domain must be active (at least one fileset mounted) before the showfdmn command can display volume information. The showfdmn command displays the following fields.
Field         Description
Id            Unique hexadecimal number that identifies a file domain
Date Created  Day, month, and time that a file domain was created
LogPgs        Number of 8-kilobyte pages in the transaction log of the specified file domain
Version       Version number for AdvFS on-disk data structures
Domain Name   Name of the file domain
Vol           Volume number within the file domain; an L next to the number indicates that the volume contains the transaction log
512-Blks      Size of the volume in 512-byte blocks
Free          Number of blocks in a volume that are available for use
% Used        Percentage of volume space currently allocated to files or metadata
Cmode         I/O consolidation mode; default mode is on
Rblks         Maximum number of 512-byte blocks read from the volume at one time
Wblks         Maximum number of 512-byte blocks written to the volume at one time
Vol Name      Name of the special device file for the volume

To display information on all file domains on a system, enter the following sequence of commands:

# cd /etc/fdmns
# showfdmn *

The /etc/fdmns directory contains a subdirectory for each file domain created by mkfdmn.

Show Filesets
Use the showfsets command to show the attributes of the filesets in a domain. See showfsets(8).
showfsets [-b|-q] [-k] domain [fileset...]

# showfsets -k domain1
mnt
        Id           : 2c73e2f9.000f143a.1.8001
        Clone is     : mnt_clone
        Files        :       79,  SLim=     0,  HLim=     0
        Blocks (1k)  :      331,  SLim=     0,  HLim=     0
        Quota Status : user=on  group=on
        Fragging     : on
mnt_clone
        Id           : 2c73e2f9.000f143a.2.8001
        Clone of     : mnt
        Revision     : 1
#

The -b option lists the names of the filesets in a domain, without additional detail. The -q option displays the quota limits for filesets in a domain. The -k option displays the number of blocks in terms of 1K blocks instead of the default 512-byte blocks. The showfsets command displays the following fields for each fileset.
Field         Description
Id            A combination of the file domain identifier and an additional set of numbers that identify the fileset within the file domain
Clone         Specifies whether this fileset is a clone or has a clone
Files         Specifies the number of files in the fileset and the current soft and hard quota limits
Blocks        Specifies the number of blocks currently in use by a mounted fileset and the current soft and hard quota limits
Quota Status  Specifies which quota types are enabled (enforced)

Restoring /etc/fdmns
The /etc/fdmns directory contains a set of subdirectories, one for each file domain on your system. Each subdirectory contains symbolic links to every volume in the file domain. AdvFS cannot mount filesets without this directory. The /etc/fdmns directory is similar to the /etc/fstab file in that the system uses it to mount file systems. If you install a new version of the operating system, you need to restore the directory. If the /etc/fdmns directory is deleted or corrupted, you should restore it from a backup copy. You can also use advscan or manually reconstruct /etc/fdmns.


You can manually reconstruct the /etc/fdmns directory if you know the name of each file domain and its associated volumes. For example, to reconstruct the domain domain1 with the volume dsk3c, do the following:
# mkdir /etc/fdmns
# mkdir /etc/fdmns/domain1
# cd /etc/fdmns/domain1
# ln -s /dev/disk/dsk3c

Alternatively, you can use the advscan(8) command to locate AdvFS volumes on disk devices and rebuild all or part of your /etc/fdmns directory.
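A hedged sketch of the advscan alternative; the device name is an example, and the option letters should be confirmed against advscan(8):

# advscan -r dsk3      (scan dsk3 for AdvFS volumes and recreate missing /etc/fdmns entries)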

Domain Panics
When a metadata write error occurs, or if corruption is detected in a single AdvFS file domain, the system initiates a domain panic (rather than a system panic) on the file domain. This isolates the failed domain and allows a system to continue to serve all other domains. After a domain panic AdvFS no longer issues I/O requests to the disk controller for the affected domain. Although the file domain cannot be accessed, the filesets in the file domain can be unmounted.

Backup and Restore
The dump and restore commands support only UFS file systems. AdvFS provides equivalent commands: vdump and vrestore for backups to/from local storage devices, and rvdump and rvrestore for backups to/from remote storage devices. The usage is similar; however, the tape format is not compatible. The vdump and vrestore commands:

- Support AdvFS, UFS, and NFS file systems
- Allow you to back up individual subdirectories using the -D flag
- Write data in a compressed form using the -C flag, reducing storage and running faster on slow backup devices
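A short sketch combining these flags; the tape device path is an assumption (check the names under /dev/tape on your system), while the -0f usage follows the vdump example later in this module:

# vdump -0 -C -f /dev/tape/tape0 /usr/users   (level-0, compressed backup of a fileset)
# vrestore -xf /dev/tape/tape0                (extract files from that backup)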

Managing Quotas
Quotas provide a useful way of tracking and controlling the amount of physical storage that each fileset consumes. The AdvFS quota system is compatible with the Berkeley-style quotas of UFS. However, the AdvFS quota system differs in several ways:

- AdvFS differentiates between quota maintenance and quota enforcement. Quota maintenance tracks file and disk space usage for users and groups in the quota.user and quota.group files in the root directory of a fileset. When quota enforcement is enabled, the AdvFS quota system enforces all quota limits set by the system administrator.
- The AdvFS quota system always maintains quota information. Unlike UFS, this function cannot be disabled.
- AdvFS supports fileset quotas, in addition to user and group quotas. This limits the amount of disk storage and number of files consumed by a fileset. This is useful when a file domain contains more than one fileset.

Similar to the Berkeley-style quota system, you can set soft and hard limits on the number of blocks and number of files. The soft limit can be exceeded for a specified grace period. The following table shows the commands used to manage AdvFS fileset quotas for users, groups, and filesets.
Table 7-5 AdvFS Quota Commands
Command      Function
quot         Displays the number of blocks in the fileset that are owned by each user
quota        Displays user and group disk usage and quota limits
quotacheck   Checks fileset quota consistency; filesets checked should be quiescent
repquota     Summarizes user/group or fileset quotas

Table 7-6 Quota Enforcement Commands
Command      Function
edquota      Edits user and group quota limits and the grace period
quotaon      Turns quota enforcement on; filesets specified must have entries in /etc/fstab and be mounted at the time
quotaoff     Turns quota enforcement off

Table 7-7 Fileset Quota Commands
Command      Function
chfsets      Changes block and file soft and hard limits for a fileset
vdf          Displays the limits and actual number of blocks used by a fileset
showfdmn     Displays space usage for the specified domain
showfsets    Displays the file and block usage limits for the filesets in a domain

To set up fileset quotas:

1. Edit the /etc/fstab file and make sure the fileset is listed with the mount options userquota and groupquota:

   domain1#fileset1 /fileset1 advfs rw,userquota,groupquota

   Note: With Version 5.1, rw and rq are both treated the same.

2. Use the quotacheck command to check the fileset quota consistency.
3. Use the edquota command to create hard and soft quota limits for users and groups, and to set the grace period for soft limits.
4. Use the chfsets command to create hard and soft quota limits for a fileset.
5. Use the quotaon command to turn on quota enforcement.

A minimal command sequence for steps 2, 3, and 5 is sketched below.
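Assuming the fstab entry above (the user name smith is illustrative):

# quotacheck /fileset1
# edquota smith
# quotaon /fileset1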

The following example sets a soft file limit of 800 and a hard file limit of 1000 on the fileset fileset1. It then uses edquota to specify the grace period for groups.
# chfsets -F 800 -f 1000 domain1 fileset1
fileset1
    Id           : 2c2f557f.000b15f4.3.8004
    File H Limit : 0 --> 1000
    File S Limit : 0 --> 800
# edquota -g -t
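You can then verify the new limits with showfsets (a quick check; the -q flag displays quota limits as described earlier):

# showfsets -q domain1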

Converting /usr from UFS to AdvFS
This section presents the procedure to convert the /usr file system from UFS to AdvFS. You can convert the /usr (UFS) file system to the equivalent /usr (AdvFS) file system by backing up the existing file system to a file and restoring it to an AdvFS environment. You can also back up to tape, or simply move the file system to an AdvFS fileset on another disk. See the Advanced File System Technical Summary, Appendix A, for details.

Requirements:

- Root user privilege
- Disk space for the intermediate file
- Five percent more disk space for the converted file system
- The Advanced File System installed on your system

Assumptions for this example:

Existing UFS configuration
    File system       /usr
    Disk partition    /dev/disk/dsk3g

Intermediate
    /tmp/usr_bck file

New AdvFS configuration
    File system       /usr
    Disk partition    /dev/disk/dsk3g
    File domain       usr_domain
    Fileset           usr

Use the following procedure to convert the /usr file system to AdvFS.

1. Log in as root.
2. Back up the /usr file system.

# cd /usr
# vdump -0f /tmp/usr_bck /usr

3. Edit the /etc/fstab file; change:

/dev/disk/dsk3g /usr ufs rw 1 2

to:

usr_domain#usr /usr advfs rq,userquota,groupquota 0 0

4. Shut down the system.
# shutdown -h now

5. Reboot to single-user mode.
6. Mount the root file system as rw; create the file domain and the fileset.

# mount -u /
# mkfdmn /dev/disk/dsk3g usr_domain
# mkfset usr_domain usr

7. Mount the usr fileset on the /usr directory.

# mount usr_domain#usr /usr

8. Restore the /usr file system.
# vrestore -xf /tmp/usr_bck -D /usr

9. Boot the system to multiuser mode.
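To confirm the conversion before returning to multiuser mode, you can display the new domain and fileset with the commands described earlier in this chapter:

# showfdmn usr_domain
# showfsets usr_domain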

Converting /usr from AdvFS to UFS
This section presents the procedure to convert AdvFS filesets to UFS file systems. You can convert AdvFS filesets to a UFS file system by backing up the existing AdvFS filesets to a file and restoring them to a UFS environment. You can also back up to tape, or simply move the file system to a UFS file system on another disk.

Requirements:

- Root user privilege
- Disk space for the intermediate files

Assumptions for this example:

Existing AdvFS configuration
    File system       /usr
    Disk partition    /dev/disk/dsk3g
    File domain       usr_domain
    Filesets          usr, var

New UFS configuration
    File system       /usr
    Disk partition    /dev/disk/dsk3g
    Intermediate      /tmp/usr_bck file, /tmp/var_bck file

Use this procedure to convert the usr and var filesets to a UFS /usr file system.

1. Log in as root.
2. Back up the usr and var filesets.

# cd /usr
# vdump -0f /tmp/usr_bck usr_domain#usr
# vdump -0f /tmp/var_bck usr_domain#var

3. Edit the /etc/fstab file; change:

usr_domain#usr /usr advfs rq,userquota,groupquota 0 0

to:

/dev/disk/dsk3g /usr ufs rw 1 2

4. Shut down the system.

# shutdown -h now

5. Reboot to single-user mode.
6. Mount the root file system as rw; create the /usr file system.

# mount -u /
# newfs /dev/rdisk/dsk3g RZ26

7. Mount the new file system on the /usr directory.

# mount -t ufs /dev/disk/dsk3g /usr

8. Restore the /usr and var file systems.
# vrestore -xf /tmp/usr_bck -D /usr
# mkdir /usr/var
# vrestore -xf /tmp/var_bck -D /usr/var

9. Boot the system to multiuser mode.

If no other filesets exist in the usr_domain file domain, log in as root and delete the file domain from the /etc/fdmns directory.
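For example, assuming the domain name used above and that no filesets remain in it, removing the domain's entry is a single command (this permanently discards the domain configuration, so be certain of your backups first):

# rm -rf /etc/fdmns/usr_domain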
Note: If you are converting a root fileset to a UFS file system, you must perform a disklabel -t ufs on the disk to which you will restore the file system. Because this command overwrites all partitions on the disk, make sure that all data on the disk is backed up before you issue the disklabel command.


Defragmenting a File Domain
The blocks of a file can be spread over a disk. When blocks of a file are contiguous, they can be moved between disk and memory in clusters, improving read/write performance of the file. The defragment utility attempts to make files contiguous. See the defragment(8) command in the Advanced File System Utilities Reference Manual.
defragment [-e] [-n] [-N threads] [-t time] [-T time] [-v] [-V] domain

-e          Ignores errors and continues, if possible. Errors that are ignored are usually related to a specific file.
-n          Prevents defragmentation from actually taking place. Use with the -v flag to display statistics on the number of extents in the file domain.
-N threads  Specifies the number of threads to run on the utility.
-t time     Specifies a flexible time interval (in minutes) for the defragment utility to run. If the utility is performing an operation when the specified time has elapsed, the procedure continues until the operation is complete.
-T time     Specifies an exact time interval (in minutes) for the utility to run.
-v          Displays statistics on the amount of fragmentation in the file domain and information on the progress of the defragment procedure. Selecting this flag slows down the defragmentation process.
-V          Same as -v, but includes information on each operation on each file.
domain      Specifies the name of an existing file domain.
Before you can defragment a file domain, all filesets in the file domain must be mounted. Do not run the defragment utility while the addvol, balance, defragment, or rmvol utilities are running on the same file domain.
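For example, to report a domain's current fragmentation without changing anything, and then to defragment for a flexible interval of about an hour (the domain name usr_domain is illustrative):

# defragment -vn usr_domain
# defragment -v -t 60 usr_domain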


Using advfsstat
The performance of a disk depends on the I/O demands placed upon it. If your file domain is structured so that heavy access is focused on one volume, system performance is likely to degrade. Once you have determined the load balance on your system, there are a number of ways to equalize the activity and increase throughput.

The advfsstat command displays detailed information about a file domain, including information about the AdvFS buffer cache, fileset vnode operations, locks, the name cache, and volume I/O performance. The command reports information in units of one disk block (512 bytes). By default, the command displays one sample; you can use the -i option to output information at specific time intervals. The following example of the advfsstat command shows the current volume I/O queue statistics.
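A minimal invocation using the -i option might look like the following (the domain name usr_domain and the five-second interval are illustrative; the volume I/O queue display takes additional flags, described in advfsstat(8)):

# advfsstat -i 5 usr_domain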