Tru64 UNIX File Systems
Overview
The Tru64 UNIX operating system supports the following types of file systems:

- UNIX File System (UFS) -- a local file system
- ISO 9660 Compact Disk File System (CDFS) -- a standard CD-ROM file system
- Memory File System (MFS) -- a UFS that resides in memory for speed
- Network File System (NFS) -- a facility for sharing files in a heterogeneous network
- Advanced File System (AdvFS) -- a local, log-based file system discussed in Module 7
- File on File Mounting File System (FFM) -- allows regular, character, or block-special files to be mounted over regular files for a STREAMS-based pipe
- Process File System (procfs) -- a local file system that enables running processes to be accessed and manipulated as files
- File Descriptor File System (fdfs) -- allows applications to reference a process's open file descriptors as if they were files
- DVD File System (dvdfs) -- provides the ability to mount and read UDF-formatted Digital Versatile Disks on both single nodes and clusters

AdvFS and UFS are the principal file systems used by applications and the components of the operating system. AdvFS is the default file system at installation time.
Note
UFS is read-only in TruCluster Server, and MFS is unsupported in TruCluster Server.

Virtual File System
The operating system supports multiple types of file systems by using a Virtual File System (VFS):

- VFS provides file-system independent support for system calls.
- Different file system types are not apparent to the general user.

Figure A-1 illustrates the virtual file system interface providing access to other file systems.

Figure A-1 File System Interface

[Figure: VFS layered above the UFS, AdvFS, NFS, and CDFS file systems]

VFS sits logically on top of the other file systems. A file system call goes to a VFS routine that determines the file system type it applies to, then calls a specific file system routine.

Accessing a File Through VFS
The process to access a file through the virtual file system is as follows:

1. Each open file is represented by a vnode in the system vnode table.
2. A process accesses a file through a file descriptor in its file descriptor table.
3. This descriptor points to an entry in the system file table.
4. The system file table includes a pointer to the vnode table.
5. The vnode table points to the disk blocks.
6. A memory buffer cache holds currently used disk blocks, for faster access.

Figure A-2 shows the process of accessing a file through VFS.
Figure A-2 Accessing a File Through VFS

[Figure: file descriptors in a process descriptor table point to file structures in the system open file table; each file structure points to a vnode in the vnode table, which references disk blocks through the disk buffer cache]

UFS File Systems
UNIX File System
UFS, also known as the Berkeley fast file system, was developed by the University of California at Berkeley. Its goal is to lay out files for fast access with minimal waste of space. File names can contain up to 255 characters. UFS:

- Uses a large block size (4096 or 8192 bytes) to maximize the amount of data transferred with each I/O request
- Allows blocks to be split into fragments to reduce wasted space

There are two differences between UFS and the System V file system:

- Use of blocks and fragments
- Changed file system layout using cylinder groups
Figure A-3 Small Blocks Require More Overhead

Figure A-4 Large Blocks Waste Space

Figure A-5 Large Blocks and Small Fragments

[Figures: the same two files laid out three ways -- with small blocks only, with large blocks only, and with large blocks plus small fragments]

In the first two figures, you see file systems with only one block size and no fragment size. A small block size requires more overhead and more time to handle large files; a large block size wastes space when files are small. 4.3BSD, ULTRIX, and Tru64 UNIX file systems allow a block to be divided into smaller pieces called fragments. The default fragment size is typically 1 KB. When disk space is needed, either a block or a fragment can be allocated.
Note
If a file is larger than 96 KB, the system does not bother to allocate a fragment; it assumes the file will grow larger, so it allocates a full block.

Many units in the file system are measured in fragments rather than blocks. For the layout policies to be effective, the file system cannot be kept completely full. Each file system maintains a parameter (minfree) for the minimum percentage of file system blocks that must be kept free (the default value is 10%). Below this threshold, only the superuser can allocate blocks. Reducing this percentage may hurt performance in a file system that is actively allocating and freeing blocks.

UNIX File System Layout
A file system can divide a disk partition into one or more cylinder groups. Cylinder groups require more overhead for storing extra structure, but the gain in performance is worth it. The superblock is replicated in a different position in each cylinder group to prevent loss of critical data in case an entire cylinder or surface is damaged. The following figure illustrates the UNIX file system layout within a partition.
Figure A-6 UNIX File System Layout

[Figure: a partition divided into cylinder groups 0 through N; the partition begins with the boot block and primary superblock, and each cylinder group contains an alternate superblock, a cylinder group block, an inode table, and data blocks]

UFS Storage Model
The UFS is a good example of a traditional file system. Each UFS disk (or disk partition) contains one separate file system. You mount the file system into the directory hierarchy using mount points. The directory hierarchy layer is tightly bound to the physical storage layer. When a file system becomes full, you cannot move selected files without changing the pathnames of those files. The following figure shows the tight binding between the directory hierarchy and physical storage.

Figure A-7 UFS Storage Model

[Figure: a directory hierarchy in which the root file system and each mounted file system are each bound to their own physical storage]

This is why many people consider the terms file system and partition to be equivalent.

Creating a UFS File System
Overview
The root, /usr, and /var file systems can be selected as UFS during the initial installation, although the default file system is now AdvFS. If you want separate UFS file systems for users' home directories, layered products, or other files, you must create these file systems after the installation.

Using the newfs Command
When you have a new partition or logical volume, you must prepare it to hold files by creating a file system. To create a new file system, use the newfs command.
newfs [ -N ] [ options ] device

-N        Displays file system parameters without creating a file system.
options   Options depend on the type of file system; for example, for UFS you
          can specify the size in sectors or the number of bytes per inode.
          See newfs(8) for more options.
device    The unmounted, raw device name; for example, /dev/rdisk/dsk1a.

You must be root to use newfs. The newfs command destroys all data on an existing file system. Example A-1 shows how to create a UFS file system with a block size of 8192 bytes and a fragment size of 1024 bytes on partition b of an RZ28 disk (dsk6).
Example A-1 Using the newfs Command
# disklabel dsk6
# /dev/rdisk/dsk6c:
type: SCSI
disk: RZ28
.
.
.
8 partitions:
#        size   offset    fstype   [fsize bsize   cpg]
  a:   131072        0    unused        0     0         # (Cyl.    0 -   82*)
  b:   401408   131072    unused        0     0         # (Cyl.   82*-  336*)
  c:  4110480        0    unused        0     0         # (Cyl.    0 - 2594 )
  d:  1191936   532480    unused        0     0         # (Cyl.  336*- 1088*)
  e:  1191936  1724416    unused        0     0         # (Cyl. 1088*- 1841*)
  f:  1194128  2916352    unused        0     0         # (Cyl. 1841*- 2594 )
  g:  1787904   532480    unused        0     0         # (Cyl.  336*- 1464*)
  h:  1790096  2320384    unused        0     0         # (Cyl. 1464*- 2594 )

# /usr/sbin/newfs -b 8192 -f 1024 /dev/rdisk/dsk6b rz28
Warning: 928 sector(s) in last cylinder unallocated
/dev/rdisk/dsk6b: 401408 sectors in 254 cylinders of 16 tracks, 99 sectors
        196.0MB in 16 cyl groups (16 c/g, 12.38MB/g, 3008 i/g)
super-block backups (for fsck -b #) at:
 32, 25488, 50944, 76400, 101856, 127312, 152768, 178224,
 203680, 229136, 254592, 280048, 305504, 330960, 356416, 381872,
# disklabel dsk6
# /dev/rdisk/dsk6c:
type: SCSI
disk: RZ28
.
.
.
8 partitions:
#        size   offset    fstype   [fsize bsize   cpg]
  a:   131072        0    unused        0     0         # (Cyl.    0 -   82*)
  b:   401408   131072    4.2BSD     1024  8192    16   # (Cyl.   82*-  336*)
  c:  4110480        0    unused        0     0         # (Cyl.    0 - 2594 )
  d:  1191936   532480    unused        0     0         # (Cyl.  336*- 1088*)
  e:  1191936  1724416    unused        0     0         # (Cyl. 1088*- 1841*)
  f:  1194128  2916352    unused        0     0         # (Cyl. 1841*- 2594 )
  g:  1787904   532480    unused        0     0         # (Cyl.  336*- 1464*)
  h:  1790096  2320384    unused        0     0         # (Cyl. 1464*- 2594 )
#

1. Partition size, tracks/cylinder, sectors/track from the disk label. 2. The superblock contains critical data about the file system, so it is duplicated in each cylinder group.

Using the extendfs Command
You can use the extendfs command to increase the storage space in an existing UFS file system. The file system must not be mounted when you perform this operation.
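For example, after enlarging the underlying partition or LSM volume, the sequence might look like the following. This is a hedged sketch: the device and mount point are stand-ins, and extendfs(8) documents the exact options for your version.

# umount /usr/local
# /usr/sbin/extendfs /dev/rdisk/dsk6b
# mount /dev/disk/dsk6b /usr/local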

Adjusting UFS Parameters with tunefs
The tunefs command changes the dynamic parameters of a UFS file system that affect the layout policies. The parameters to be changed are indicated by the flags specified. Because the superblock is not kept in the buffer cache, the changes take effect only when the program is run on unmounted file systems. The system must be rebooted after the root file system is tuned.
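For example, to lower minfree on an unmounted file system (a hedged sketch; -m is the conventional tunefs flag for minfree, and tunefs(8) lists the full flag set):

# umount /usr/local
# /usr/sbin/tunefs -m 5 /dev/rdisk/dsk6b
# mount /dev/disk/dsk6b /usr/local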


Checking a UNIX File System
When to Check File Systems
File systems are checked automatically when the system starts. You should check a file system before mounting it manually or backing it up.

Using the fsck Command
Use the fsck command to check a UNIX file system. AdvFS and NFS do not need to be checked. The fsck command looks for and corrects inconsistencies such as:

- Unreferenced inodes
- A link count number in an inode that is too large
- Missing blocks in the free list
- Blocks in the free list that are also in files, or blocks that are in two files
- Incorrect counts in the superblock
fsck [ options ] [ filesystem ]

-p           Checks and corrects a set of inconsistencies. If it encounters
             other errors, it exits. Without the -p option, fsck works
             interactively, prompting before each correction.
-b block     Specifies the block to use as the superblock. (Block 32 is
             usually used as the alternate superblock.)
-y           Assumes a yes response to all prompts.
-n           Assumes a no response to all prompts.
filesystem   The raw device name for the file system you want checked; for
             example, /dev/rdisk/dsk1a. If you do not specify a file system,
             all file systems in /etc/fstab are checked.

Orphaned files are put in the lost+found directory, using the inode number as a name. If the lost+found directory does not exist, you can create it with the mklost+found(8) command; if you do not, it is created automatically when orphaned files are found. You must be root to use fsck. Check file systems when they are unmounted. Because the root file system cannot be unmounted, check it in single-user mode. This prevents fsck from reporting, and attempting to fix, inconsistencies caused by normal system operations. Example A-2 shows how to invoke the fsck command.

Example A-2 Using the fsck Command
# /usr/sbin/fsck /dev/rdisk/dsk1c
** /dev/rdisk/dsk1c
** Last Mounted on /usr/local
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
699 files, 92155 used, 3596 free (204 frags, 424 blocks, 0.2% fragmentation)
# fsck /dev/rdisk/dsk3g
** /dev/rrz3g
File system unmounted cleanly - no fsck needed
# fsck /dev/rdisk/dsk3b
** /dev/rrz3b
BAD SUPER BLOCK: MAGIC NUMBER WRONG
USE -b OPTION TO FSCK TO SPECIFY LOCATION OF AN ALTERNATE
SUPER-BLOCK TO SUPPLY NEEDED INFORMATION; SEE fsck(8).
# fsck -b 32 /dev/rdisk/dsk3b

1. Phase 1 checks for consistency of inodes. 2. Phase 2 checks for directories that point to incorrect inodes. 3. Phase 3 checks for unreferenced directories. 4. Phase 4 checks link counts in files and directories. 5. Phase 5 checks bad, duplicated, or unreferenced blocks and total free block count. 6. The fsck command displays the number of files in the file system, the number of used and free blocks, and the percentage of fragmentation. 7. The fsck command recognizes when a file system has been unmounted cleanly. 8. Here an attempt is made to check the swap area, which does not contain a file system. 9. The superblock has been corrupted (or in this case does not exist). The command can be repeated, specifying an alternate superblock with the -b option. 10. An alternate superblock block number can be specified to the fsck command.


Using the dcheck Command
The dcheck command reads the directories in a file system and compares the link count in each inode with the number of directory entries that reference it. If a file system is not specified, a set of default file systems is checked. The fsck command supersedes the dcheck command for normal consistency checking.

Using the icheck Command
The icheck command examines a file system, builds a bit map of used blocks, and compares this bit map against the free map maintained on the file system. If the file system is not specified, a set of default file systems is checked. The normal output of icheck includes a report of the following items:

- The total number of files and the numbers of regular, directory, block special, character special, and fifo files
- The total number of blocks in use and the numbers of single-, double-, and triple-indirect blocks and directory blocks
- The number of free blocks
- The number of blocks missing; that is, not in any file or in any free map

The icheck command is obsoleted for normal consistency checking by fsck.

Using the clri Command
The clri command writes zeros on the i-nodes with the decimal i-numbers on the specified file system. After clri has finished its work, any blocks in the affected file are defined as "missing" when you run icheck on the file system. Read and write permission is required on the specified file system device. The i-node becomes allocatable.

The primary purpose of this routine is to remove a file that does not appear in any directory. If you use the command to remove an i-node that does appear in a directory, take care to track down the entry and remove it. Otherwise, when the i-node is reallocated to some new file, the old entry will still point to that file. If you then remove the old entry, you will destroy the new file, and the new entry will again point to an unallocated i-node, so the entire cycle repeats itself. The clri command is obsoleted for normal file system repair work by the fsck command.


Mounting and Unmounting UFS File Systems
Overview
Mounting and unmounting UFS file systems is done in the same manner as for AdvFS file systems. You can also place an entry in /etc/fstab to automatically mount a UFS file system when the system comes up. To display the currently mounted file systems, use the mount command without arguments, as shown in this example.
Example A-3 Using the mount Command
# mount /dev/disk/dsk0h /usr/users
# mount -o ro,nosuid /dev/disk/dsk1c /usr/local
# mount
/dev/disk/dsk0a on / type ufs (rw)
/dev/disk/dsk0g on /usr type ufs (rw)
/dev/disk/dsk0h on /usr/users type ufs (rw)
/dev/disk/dsk1c on /usr/local type ufs (ro,nosuid)

For file-system-specific options, see mount(8).

Using the fstab File
Example A-4 shows a sample /etc/fstab file with entries for UFS file systems.
Example A-4 An /etc/fstab File
# cat /etc/fstab
/dev/disk/dsk0a        /            ufs     rw     1 1
/proc                  /proc        procfs  rw     0 0
/dev/disk/dsk0g        /usr         ufs     rw     1 2
/dev/disk/dsk0h        /var         ufs     rw     1 2
/usr/users@nfsusers    /usr/users   nfs     rw,bg  0 0
/usr/public@nfspublic  /usr/public  nfs     rw,bg  0 0

1. Special device name or remote file system to be mounted
2. Directory where the file system is mounted
3. Type of file system. See fstab(4) for a list of valid values for the type field.
4. Mount options for the type of file system. Common options are:

   ro   Read-only access
   rw   Read-write access
   rq   Read-write access with quotas
   xx   Ignore this file system entry

5. Backup frequency for dump: 0 indicates the file system is not backed up
6. The fsck pass order (root is 1): 0 indicates the file system is not checked

You must be superuser to edit the /etc/fstab file. See fstab(4). For file-system-specific options, see mount(8).
The fsck command can check file systems on different drives simultaneously, improving its performance. However, file systems on the same drive are checked sequentially.
Note
Compact disks are read-only, so they should not be checked by fsck (passnum should be 0). Do not create an entry in /etc/fstab for a compact disk unless you leave the disk in the drive. If you boot your system without a compact disk in the drive indicated in fstab, the mount daemon hangs trying to mount it.

Using the umount Command
Example A-5 shows how to use the umount command for UFS file systems.
Example A-5 Using the umount Command
# mount
/dev/disk/dsk0a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/disk/dsk0g on /usr type ufs (rw)
/dev/disk/dsk0h on /var type ufs (rw)
/usr/users@nfsusers on /usr/users type nfs (v3, rw, udp, hard, intr)
/usr/public@nfspublic on /usr/public type nfs (v3, rw, udp, hard, intr)
/dev/disk/dsk6b on /mnt type ufs (rw)
# umount /mnt
#

Compact Disk File System
Tru64 UNIX systems support the Compact Disk File System (CDFS), a read-only file system based on the international standard ISO 9660-1988. Volumes recorded in the ISO 9660-1988 (interchange level 2) or High Sierra Group (HSG) format can be mounted for reading. CDFS is a dynamically loadable kernel option. See cdfs(4) and mount(8cdfs) for more information.
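Mounting a CD might look like the following hedged sketch; the device name depends on your hardware, and mount(8cdfs) describes the CDFS-specific options:

# mount -t cdfs -o ro /dev/disk/cdrom0c /mnt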

Memory File System
The mfs command builds a memory file system (MFS), which is a UFS file system in virtual memory, and mounts it on the specified directory. When the file system is unmounted, mfs exits and the contents of the file system are lost. If mfs is sent a signal while running (for example, during system shutdown), it attempts to unmount its corresponding file system. For a memory file system, the device file argument provides only a set of configuration parameters, including the size of the virtual memory segment to allocate. If the device file is omitted, you must specify the segment size. The device file is usually the primary swap area, because that is where the file system is backed up when free memory gets low and the memory supporting the file system has to be paged. The following example creates an MFS of 128 MB (250,000 sectors), mounted on /tmp:
# /usr/sbin/mfs -s250000 /tmp

You can set up /tmp as a memory file system by adding an entry in the /etc/fstab file. For example, the following line creates a 10-MB memory file system, mounted on /tmp:
-s20480 /tmp mfs rw 1 0

Note that the contents of a memory file system are lost whenever a reboot or unmount is performed. You must be superuser to use this command. See the reference page on mfs(8) for more information.

Network File System
NFS is a service that allows you to mount directories across the network and treat those directories as if they were local. NFS is a facility for sharing files in a heterogeneous environment of processors, operating systems, and networks. Sharing is accomplished by mounting a remote file system or directory on a local system and then reading or writing files as though they were local. Exporting file systems consists of creating an /etc/exports file that lists the directories that other systems have permission to access. The exporting system (or server) plays a passive role in file sharing. The system that imports a file system (the client) can mount that file system at any point within its local file system. Imported file systems are not copied to the client's own file system but are accessed transparently using remote procedure calls. Figure A-8 shows system cabala importing two directories from system cherub.
Figure A-8 Remote Mounting with NFS

[Figure: client cabala and server cherub each have /bin, /usr, and /etc; cherub's project directory (file1, file2) is mounted on cabala as /usr/cherub, and cherub's /usr/share/man (man1 ... man8) is mounted on cabala's /usr/share/man]

A system that acts as a server can also act as a client. A server that exports file systems can also mount remote file systems exported by other systems, therefore becoming a client.
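Concretely, the setup in Figure A-8 might look like the following hedged sketch; the /etc/exports entries shown are minimal, and exports(4) describes the access-control options.

On cherub (the server):

# cat /etc/exports
/usr/share/man
/usr/project

On cabala (the client):

# mount cherub:/usr/share/man /usr/share/man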

Learning Check
1. The ____________ file system sits logically on top of the other UNIX file systems.
2. A UFS file system can divide a disk partition into one or more _________ groups.
3. To create a UFS file system, you use the ____________ command.
4. To check a UFS file system, you use the ____________ command.
5. Which command is used to mount a UFS file system /dev/disk/dsk1h on mount point /mnt?
6. What happens to the contents of a memory file system (MFS) when it is unmounted? _________________
7. Client access to a Network File System (NFS) is controlled through entries in the _______________ file on the server system.

Configuring an LSM Volume
Appendix B

Introduction
Disk storage management often requires that for each file system or database, you should:

- Allocate and reallocate disk space as space requirements change
- Address the space allocated for a particular file system or database
- Access data through an application programming interface

LSM is an integrated, host-based disk storage management tool that comes with the operating system and provides the following features:

- Online storage management -- Provides the ability to manage a system's disks as a pool of storage space for creating LSM volumes. By using LSM volumes instead of disk partitions, you can reconfigure LSM volumes to achieve the best performance and availability as your storage needs change, without having to stop storage input and output (I/O), shut down the system, or back up and restore data.
- Concatenation (disk spanning) -- Combines multiple physical disks or portions of disks into a single, larger LSM volume for use by large file systems or databases.
- Striping (RAID0) -- Improves a system's disk I/O performance by interleaving the data within a volume across several physical disks. Also enables combining multiple physical disks into an LSM volume, similar to concatenation, with better I/O performance.
- Mirroring (RAID1) -- Protects against data loss due to hardware malfunction by creating one or more mirror (duplicate) images of data on other disks.
- Boot disk mirroring -- Enables mirroring of critical system disk partitions used for booting and running the system, to ensure that no single disk failure leaves the system unusable.
- Dirty Region Logging (DRL) -- Provides fast resynchronization of a mirrored volume after a system failure by resynchronizing only the regions that were being updated when the system failed. DRL replaces the Block Change Logging (BCL) in previous LSM versions.
- Striping and mirroring (RAID0+1) -- Provides improved system performance and high data availability.
- RAID5 -- Provides higher data availability by storing parity information along with striped data; the striping also improves read performance.
- Hot-sparing -- Automatically reacts to I/O failures on redundant (mirrored or RAID5) objects by relocating affected objects to spare disks or to other free disk space.
- Encapsulation -- Enables migration of existing data on disks and disk partitions to LSM volumes.
- TruCluster support -- Manages storage in a TruCluster environment the same way single-system storage space is managed. All LSM features are available within a TruCluster environment except for RAID5 and boot disk mirroring.

Objectives
To effectively use the Logical Storage Manager, you should be able to:

- Explain the LSM concepts of diskgroup, disks, subdisks, plexes, volumes, concatenation, striping, and mirroring
- Set up and initialize LSM using the volsetup utility
- Create and mirror volumes, then back up a volume using the volassist command
- Mirror the root, swap, and /usr partitions with LSM
- Manage an LSM configuration using the LSM Storage Administrator, lsmsa (graphical user interface)

Resources
For more information on the topics in this module, see the following:

- Tru64 UNIX Logical Storage Manager
- Tru64 UNIX Logical Storage Manager Software Product Description or QuickSpecs
- Tru64 UNIX System Administration


Introducing LSM Concepts
Overview
This section explains concepts and terminology used to describe the features and capabilities of the Tru64 UNIX Logical Storage Manager (LSM). All LSM subsets are part of the base Tru64 UNIX installation as optional subsets, and specific kernel options must be selected when building the kernel. Basic LSM functionality can be used without a license, but additional functions such as mirroring, striping, RAID level 5, and the graphical administration tool require a separate LSM license (LSM-OA). LSM builds virtual disks called volumes on top of UNIX system disks. A volume is a special device that contains data used by a UNIX file system, database, or other application. Figure B-1 shows the block and character device interface relationship of databases, file systems, applications, and secondary swap with LSM volumes.
Figure B-1 LSM Disk Storage Management

[Figure: through the block/character device interface, a database uses vol01; a file system uses vol02 (mkfdmn -r /dev/vol/dg1/vol02 new_domain; mkfset new_domain new; mount new_domain#new /mnt); an application uses vol03; and secondary swap uses vol04 (swapon /dev/vol/rootdg/vol04)]

Things to be aware of when using LSM volumes:

- User applications perform their I/O to volumes.
- Volumes have a standard driver interface: block and character device. Block and character devices can be opened, closed, read, and written like physical disk devices.
- Block-special files are located at /dev/vol/diskgroupname.
- Character-special files are located at /dev/rvol/diskgroupname.
- Volumes in the rootdg are also located in the /dev/vol and /dev/rvol directories.
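Because a volume presents the same block and character interfaces as a disk partition, the usual file system commands apply unchanged. A minimal sketch, assuming a volume named vol01 already exists in rootdg:

# newfs /dev/rvol/rootdg/vol01
# mount /dev/vol/rootdg/vol01 /mnt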

LSM Objects
LSM maintains a configuration database that describes the objects in the LSM configuration. Table B-1 summarizes LSM objects.

Table B-1 LSM Objects

Volume      A virtual disk device that looks to applications and file systems
            like a regular disk partition device. Volumes are logical devices
            that appear in the /dev/vol (block interface) or /dev/rvol
            (character interface) directories. I/O is performed on volumes.
            Volumes are categorized as fsgen or gen depending on their usage
            and content type. If the volume will be used by a file system, the
            usage type is fsgen. The gen usage type is useful for databases
            that reside directly on volumes. In addition, there are two special
            volume categories (root and swap) used for root disk encapsulation.
Plex        An instance of the volume data; a mirrored volume therefore has two
            or more plexes. A plex is made up of one or more subdisks.
Subdisk     A logical representation of a set of contiguous disk blocks on a
            physical disk. Subdisks are associated with plexes to form volumes.
Disk        A collection of nonvolatile, read/write data blocks that can be
            randomly accessed. LSM supports SCSI and DSA disks.
Diskgroup   A collection of disks that share the same LSM configuration
            database. The root disk group, rootdg, is a special, private disk
            group that must exist and contains configuration information on all
            other disk groups in the configuration.

Figure B-2 shows the relationship of volumes, plexes, subdisks, and physical disks for a simple volume where 1000 blocks on a volume map to a physical disk. In this illustration, the mapping is a straight pass through to the physical disk. Note the tasks associated with these LSM objects.

Figure B-2 LSM Object Relationships

[Figure: blocks 1-1000 of volume V1 map through plex P1 and subdisk SD1 to physical disk dsk6b (disk media name disk01); the associated tasks are creating and associating volumes, plexes, and subdisks; specifying whether multiple subdisks should be concatenated or striped; and specifying the offset and length on the physical disk]

The following sections relate LSM objects to various forms of disks:

- Concatenated
- Striped
- Mirrored
- RAID level 5

LSM Concatenated Disks
Disk concatenation refers to the arrangement of subdisks both sequentially and contiguously in the address space of a plex. With concatenation, subdisks are linked together into the logical address space, and data is accessed from each of the subdisks in sequence. Concatenation enables you to:

- Create large volumes using any number of subdisks.
- Create volumes from smaller leftover pieces of disks. This may be useful in smaller configurations when running out of disk space.

Figure B-3 shows how LSM objects relate to disk concatenation; a hedged command sketch follows the figure. Note that subdisk SD2's blocks pick up where subdisk SD1's blocks leave off, creating a contiguous address space for the plex. In this example, blocks 1-600 of Plex P1 are obtained from subdisk SD1 and blocks 601-1000 of Plex P1 are obtained from subdisk SD2.
Figure B-3 LSM Concatenation

[Figure: two subdisks are associated to concatenate disks into a larger volume; blocks 1-600 of plex P1 come from subdisk SD1 on dsk4b (disk01), and blocks 601-1000 come from subdisk SD2 on dsk6b (disk02)]

LSM Striping
Disk striping (RAID level 0) is a method of distributing an I/O load, or hot-spot, equally to several devices. The data stream is divided into fixed-size blocks (called the stripe width) which are interleaved across several disks. By allocating storage across multiple disks, striping helps to balance I/O load where high traffic exists on certain disks. Balanced I/O improves volume throughput. Figure B-4 shows a striped volume with a stripe width of 50 blocks. If the application issues read requests for blocks 25 and 51, data access would occur at the same time on the two disks. The main difference between striping and concatenating is how the plex blocks are mapped onto its subdisks.

Figure B-4 Striping Using LSM

[Figure: two subdisks are associated to a plex to stripe across multiple disks; subdisk SD1 on dsk4b (disk01) holds blocks 1-50, 101-150, ... 901-950, and subdisk SD2 on dsk6b (disk02) holds blocks 51-100, 151-200, ... 951-1000]

Note

The LSM striping figure shows a stripe width of 50 blocks. This is for illustration purposes only. It is not a real-world stripe width size.

Striping has some system administration issues you must know about:

- Currently, a striped plex cannot be resized.
- Subdisks that use striping cannot be split or replaced.
- All striping subdisks must be the same size.
- Performance is highly dependent on the size of each block in the stripe. By default, this is set to 128 sectors on the disk, which is optimal for most applications.
- A striped plex can be much faster than a nonstriped (concatenated) one. Such a plex can be given preference for data reads (the select read policy) instead of using the default round-robin policy.
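Creating a striped volume with volassist might look like the following. This is a hedged sketch: the layout and nstripe attribute names follow common volassist usage, but verify them against volassist(8) for your version.

# volassist make stripevol 1g layout=stripe nstripe=2 disk01 disk02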


LSM Mirroring
LSM enables you to create mirrored volumes (RAID level 1) by attaching multiple plexes, which are up-to-date copies of the data in a volume. Any write operation to the volume is written to all of the plexes (up to eight in parallel). Any read operation from the volume can be satisfied by any of the plexes. The default read policy is to use a round-robin approach to distribute multiple read requests to each plex in the volume. Alternately, if you have a device that is particularly faster than others, you can designate that plex as preferred. A third read policy, select, is a hybrid of round-robin and preferred. Figure B-5 depicts mirroring of a logical volume using two physical disks.
Figure B-5 Mirroring Using LSM

[Figure: two plexes are associated to volume V1 to mirror the data; plex P1's subdisk SD1 resides on physical disk dsk4b and plex P2's subdisk SD2 resides on dsk6b, each mapping blocks 1-1000]

Multiple plexes are used primarily for reliability. The overhead associated with multiple writes balances against the advantage of reading from multiple sources. Read performance improves because the plexes have the same data, which improves overall read throughput for the volume. It is important to understand the read/write ratio and the needs of your users, to understand the performance implications of mirroring. Where possible, disks should be on different buses and ideally on different controllers. This affords higher availability and better performance.

One disadvantage to mirroring is that each plex requires the same amount of physical disk space as the original volume.
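Adding a mirror to an existing volume is a one-step volassist operation. A minimal sketch, assuming volume v1 exists and disk02 has enough free space for the second plex:

# volassist mirror v1 disk02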

LSM Striping and Mirroring
LSM supports striping and mirroring in combination (RAID level 0 and 1). When used on the same volume, this combination offers the benefits of spreading data across multiple disks and also data redundancy. To be effective, the striped plex and its mirror must be allocated from separate disks. The layout type of the mirror can be concatenated or striped.

LSM RAID Level 5
A RAID-5 volume consists of a RAID-5 plex and a log plex. Logging protects against double failure (disk and system) and should always be used with RAID-5 volumes. A RAID-5 plex provides data redundancy through the use of parity information that can be used to reconstruct data after a system failure. It also improves I/O performance for read operations through use of data striping. Mirroring of RAID-5 volumes is not currently supported. A RAID-5 plex is logically viewed as a number of storage columns, where a column is comprised of one or more subdisks. LSM uses a left-symmetric layout that stripes data and parity across columns, placing the parity in a different column for every stripe of data. Figure B-6 illustrates the left-symmetric layout for a RAID-5 volume using five disks, one disk per column. A log plex is also shown.
Figure B-6 RAID-5 Left-Symmetric Layout

The first parity stripe unit is located in the rightmost column of the first stripe. Each successive parity stripe unit is located in the next stripe, left-shifted one column from the previous parity stripe unit location, returning to the rightmost column to repeat the pattern. Each parity stripe unit contains the result of an exclusive OR (XOR) performed on the data in the data stripe units of the same stripe. The stripe width is 32 blocks (16 KB), which is the default.

Logs are associated with RAID-5 volumes by being attached as additional, non-RAID5 layout plexes. RAID-5 logs can be mirrored and striped. They must belong to disks other than those used for the RAID-5 plex.

Building Combinations with LSM
You can build various combinations of plexes with subdisks using LSM. The following figure shows one plex with a single subdisk and another plex with two subdisks.
Figure B-7 LSM Objects in Combination

[Figure: volume V1 is mirrored because it has two plexes; plex P1 maps blocks 1-1000 to subdisk SD1 on dsk4b (disk01), while plex P2 concatenates two disk regions -- subdisk SD2 (blocks 1-300, from dsk6b/disk02 starting at block 51) and subdisk SD3 (blocks 301-1000, from dsk8b/disk03 starting at block 651)]


Setting Up LSM
Overview
The Logical Storage Manager (LSM) supports a variety of configurations, including disk concatenation, striping, mirroring, and RAID-5. Before setting up LSM objects (subdisks, plexes, and volumes), you must determine which LSM configuration best serves your needs. Once the configuration is known, LSM and the appropriate licenses can be installed (if not already installed). Then LSM can be initialized and disks set up for LSM use. Existing UFS or AdvFS data can also be encapsulated into LSM volumes.

LSM Hardware and Software Requirements
LSM Hardware Requirements

LSM is supported on any hardware platform that supports Tru64 UNIX Version 3.2 or higher. Any devices supported by the hardware and Tru64 UNIX for the hardware/software combination also support LSM use. All Small Computer Systems Interface (SCSI) and DIGITAL Storage Architecture (DSA) disks supported by this version of Tru64 UNIX are supported by LSM. SCSI redundant array of independent disks (RAID) hardware devices are supported as standard disks, with each RAID device logical unit viewed as a physical disk. The Tru64 UNIX Operating System Software Product Description provides information about hardware supported by the operating system.

LSM Software Requirements

The Logical Storage Manager software is supplied as part of the base operating system. The LSM Storage Administrator is a Motif-based application that requires the Basic X Environment subset (OSFX11nnn) to be installed on the system. Typically, the LSM subsets and the LSM kernel options are selected when the base operating system is installed. If the LSM subsets are not already installed, use the setld -l command with the Tru64 UNIX distribution media to install them.

Licensing Requirements

The base Tru64 UNIX license allows you to use the LSM concatenation and spanning feature. No special LSM software license is needed to include multiple physical disks within a single LSM volume. To use LSM advanced features, such as mirroring, striping, and the Storage Administrator (lsmsa), you must have an LSM license (LSM-OA). The StorageWorks Software license includes the license for LSM advanced features.


Configuration Limitations
The maximum configuration supported by the Tru64 UNIX Logical Storage Manager V5.1 is:

- 8189 volumes (V4.0B: 4093)
- 8192 plexes per system
- 32 plexes per volume (V4.0B: 8)
- 8192 subdisks (the basic unit of disk space allocation) per plex (V4.0B: 4096)
- Volume size of 1 TB (V4.0B: 512 GB)

More LSM Concepts
A few more concepts must be discussed before starting to initialize LSM.

Configuration Database

The LSM configuration database contains records describing all of the objects (volumes, plexes, subdisks, disk media names, and disk access names) being used in a disk group. A special type of configuration database exists for the disks in the default rootdg disk group: the root configuration. Disks in the rootdg have root configuration information in addition to ordinary disk group configuration information. The distinction is that the root information contains records for all disks on the system, not just those in the individual disk groups.

/etc/vol/volboot File

The /etc/vol/volboot file is used by the vold daemon during startup to locate copies of the rootdg configuration database. Disks containing a copy of the rootdg configuration database are added to the /etc/vol/volboot file. This file also contains a host ID used by LSM to establish ownership of physical disks. The host ID ensures that two or more hosts that can access disks on a shared SCSI bus will not interfere with each other in their use of the shared disks.
Note
Although the /etc/vol/volboot file is a text file, do not edit the file, and do not, under any circumstances, delete the file.
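Rather than reading the file directly, you can display what LSM has recorded in it with the voldctl command. A hedged sketch (voldctl(8) documents the subcommands):

# /sbin/voldctl list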

Private and Public Regions

LSM disks use two regions on each physical disk: a private region and a public region. The private region is a relatively small area in which LSM keeps its disk media label and a configuration database. The public region is a large area of the disk used as the storage space for building subdisks. These regions are used to create various LSM disks.

LSM Disk Types

As shown in Figure B-8, there are three types of LSM disks: the LSM simple disk, the LSM sliced disk, and the LSM nopriv disk. The LSM simple disk and LSM sliced disk each have both a public and a private region. The third type, the LSM nopriv disk, does not contain a private region.
Figure B-8 LSM Disk Types

[Figure: an LSM simple disk (for example, dsk2g) contains both public and private regions in one partition; an LSM sliced disk spans the entire disk, with separate public and private regions identified by the disk label; an LSM nopriv disk (for example, dsk4c) contains a public region only]

LSM disk type characteristics are summarized in Table B-2.

Table B-2 LSM Disk Type Characteristics

Simple   Contains both public and private regions in the same partition. Use
         simple disks when adding a partition to LSM.
Sliced   Typically used when the entire disk is under LSM control. The disk
         label contains information that identifies the public and the private
         regions. Use sliced disks when adding an entire disk to LSM.
Nopriv   Contains no private region and therefore does not contain LSM
         configuration information. A nopriv disk can only be added to an
         existing disk group that already contains a simple disk with existing
         LSM data.

LSM configuration databases are stored in the private region of each LSM disk except for the nopriv disk type. Public regions collectively form the storage space for application use. For purposes of availability on smaller systems, each simple and sliced disk can contain multiple copies of the configuration database. Example B-1 shows a portion of a disk's label after the b partition has been added to LSM. Example B-2 shows a portion of the disk's label, obtained using the disklabel command, after the entire disk has been added to LSM.
Example B-1 Disk Label after Adding Disk Partition to LSM
#        size   offset    fstype   [fsize bsize   cpg]
  a:   131072        0    unused        0     0         # (Cyl.    0 -   95*)
  b:   401408   131072    LSMsimp                       # (Cyl.   95*-  386*)
  c:  4110480        0    unused        0     0         # (Cyl.    0 - 2987*)
  d:  1191936   532480    unused        0     0         # (Cyl.  386*- 1253*)
  e:  1191936  1724416    unused        0     0         # (Cyl. 1253*- 2119*)
  f:  1194128  2916352    unused        0     0         # (Cyl. 2119*- 2987*)
  g:  1787904   532480    unused        0     0         # (Cyl.  386*- 1686*)
  h:  1790096  2320384    unused        0     0         # (Cyl. 1686*- 2987*)

Example B-2 Disk Label after Adding Entire Disk to LSM
#        size   offset    fstype   [fsize bsize   cpg]
  a:   131072        0    unused        0     0         # (Cyl.    0 -   95*)
  b:   401408   131072    unused        0     0         # (Cyl.   95*-  386*)
  c:  4110480        0    unused        0     0         # (Cyl.    0 - 2987*)
  d:  1191936   532480    unused        0     0         # (Cyl.  386*- 1253*)
  e:  1191936  1724416    unused        0     0         # (Cyl. 1253*- 2119*)
  f:  1194128  2916352    unused        0     0         # (Cyl. 2119*- 2987*)
  g:  4109456        0    LSMpubl                       # (Cyl.    0 - 2986*)
  h:     1024  4109456    LSMpriv                       # (Cyl. 2986*- 2987*)

LSM Disk Naming

LSM disks are accessed using two disk names:

- Disk access names: System device names like dsk4c and dsk7. You can also use old device names like rz4c and rz6.
- Disk media names: A logical name given to the disk when it is added to a disk group. You can use any name you like, even the new or old device name. Examples include disk01 or even dsk8 (where the dsk8 logical name was chosen to be the same as the system device name). The disk media name is usually abbreviated to dm or DM in LSM displays.

New disk access names take the form dsknP, where n is a number and P is the partition letter in the range a to h. Old disk access names use standard disk naming conventions and take the form dd[l]n[nnn][P]; for example, rz4g. For a simple or nopriv disk, you must specify a partition letter. For a sliced disk, you must specify a physical drive without a partition letter. For example, simple disks have device access names like dsk6e, and sliced disks have device access names like dsk8. The disk access name for a given disk is determined by the location of the disk (that is, controller, bus number, SCSI ID).

The disk media name is an administrative name for the disk when it is placed into a disk group, such as disk01. If you do not assign a disk name, the voldiskadd utility defaults to disk_nn, where nn is a sequential number, if the disk is being added to the rootdg disk group. Otherwise, the disk media name defaults to groupname_nn, where groupname is the name of the disk group that the disk is being added to and nn is the next available number in the group. The disk media name is the name for that disk regardless of its location. Subdisks are defined using the disk media name. Therefore, if disk dsk4 (disk01) were swapped with dsk6 (disk02) and the system rebooted, LSM would automatically detect that disk02 is now at dsk4 and disk01 is now at dsk6, and would properly handle the volume mapping.
Note
These defaults occur when you use the voldiskadd utility. When using the volsetup utility, the disk media name becomes the same as the disk access name. You can change the disk media name using the voledit rename command.

Disk Groups

Disks are organized by LSM into disk groups. A disk group is a named collection of disks that share a common configuration. Logical volumes are created within a disk group and are restricted to the disks within that group. All systems with LSM installed must have the rootdg disk group. By default, all LSM operations are directed toward this group; you will often receive an error on a command and discover that you did not provide the disk group name.

Disk groups are useful in situations where all data related to a particular set of applications, or to a group of users, must be moved to another system. In these cases, the data along with its configuration information can be moved together. In other words, the disks being moved self-describe their configuration.
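Creating an additional disk group and moving it to another system might look like the following. This is a hedged sketch: the voldg keywords shown (init, deport, import) follow voldg(8), but the disk-specification syntax (disk05=dsk5) is illustrative and should be checked there.

# voldg init datadg disk05=dsk5
# voldg deport datadg
# voldg import datadg        (run on the receiving system)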

Using the volprint Command
During the LSM initialization process, you should keep track of what is happening as you go. Use the /sbin/volprint command to display information about disk groups, subdisks, plexes, and volumes. The format for the /sbin/volprint command is:
/sbin/volprint -g diskgroup [-dGhlpstv]

These options are described in Table B-3.
Table B-3 Some Options for the volprint Command

-g diskgroup   Selects records from the specified disk group; if the -g
               diskgroup option is not used, the default is the rootdg disk
               group
-d             Selects only disk media records
-G             Selects only disk group records
-h             Lists complete hierarchies below selected records. For volumes,
               this includes all associated plexes and subdisks; for plexes,
               the list includes all associated subdisks
-l             Displays all information from each selected record
-p             Selects only plexes
-s             Selects only subdisks
-t             Prints single-line output records for the type of record chosen
-v             Selects only volumes for display

Example B-3 provides an example volprint display.
Example B-3 A volprint Display
# volprint -g lottaspace -ht
DG NAME          NCONFIG     NLOG     MINORS   GROUP-ID
DM NAME          DEVICE      TYPE     PRIVLEN  PUBLEN   STATE
V  NAME          USETYPE     KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME          VOLUME      KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME          PLEX        DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

dg lottaspace    default     default  5000     917638011.1063.dove

dm disk04g       dsk4g       simple   4096     1783808
dm disk05g       dsk5g       simple   4096     1783808
dm disk07g       dsk7g       simple   4096     1783808

v  big-vol       fsgen       ENABLED  ACTIVE   1500000  SELECT
pl big-vol-01    big-vol     ENABLED  ACTIVE   1500000  CONCAT             RW
sd disk04g-01    big-vol-01  disk04g  0        1500000  0         dsk4g    ENA
pl big-vol-02    big-vol     ENABLED  ACTIVE   1500000  CONCAT             RW
sd disk07g-01    big-vol-02  disk07g  0        1500000  0         dsk7g    ENA

Managing LSM
Once disks have been placed under LSM control and into disk groups, LSM volumes can be managed using one of three methods:

- Graphical user interface -- Provides windows, icons, and menus to manage LSM volumes. The utility that you run is the lsmsa utility.
- Menu interface -- A character-cell menu interface used to manage volumes. You use the voldiskadm utility. Each main menu entry leads you through a particular operation by providing information and prompting for input.
- Command line interface -- Provides both a top-down and a bottom-up approach to LSM storage management. The top-down approach uses the volassist utility to automatically build the underlying LSM objects. The bottom-up approach uses a combination of low-level commands to build individual objects to customize the construction of LSM volumes.


Initializing LSM with the volsetup Utility
Overview
It is easier to initialize LSM using the volsetup script than with the individual commands. The volsetup script:

- Executes commands that modify disk labels and initialize disks for LSM
- Creates the rootdg disk group
- Adds disks into the rootdg disk group
- Creates the /etc/vol/volboot file
- Sets the system to start LSM on system reboot
- Starts the vold and voliod daemons

Using the volsetup Utility
When you start the volsetup script, you are prompted to estimate the number of disks that are to be managed by LSM.
Note
Do not specify the boot disk as one of the disks you specify to be used to initialize the rootdg. You can encapsulate the root and swap partitions and add them to the rootdg disk group later.

Example B-4 Using the volsetup Script for LSM Initialization
# /usr/sbin/volsetup
Approximate maximum number of physical disks that will be managed by LSM ? [10]
Enter the disk(s) to add into the rootdg disk group.
NOTE: Enter a blank line to end the list of disks.
? dsk2b dsk3b dsk9b dsk10b
?
Specified partition /dev/rdisk/dsk9b is marked in use.
Also partition(s) which overlap /dev/rdisk/dsk9b are marked in use.
If you continue with the operation you can possibly destroy existing data.
Would you like to continue using dsk9b?? [y,n,q,?] (default: n) y
Initialize vold and the root disk group:
Add disk dsk2b to the root disk group as dsk2b:
Addition of disk dsk2b as dsk2b succeeded.
Add disk dsk3b to the root disk group as dsk3b:
Addition of disk dsk3b as dsk3b succeeded.
Add disk dsk9b to the root disk group as dsk9b:
Specified partition /dev/rdisk/dsk9b is marked in use.
Also partition(s) which overlap /dev/rdisk/dsk9b are marked in use.
If you continue with the operation you can possibly destroy existing data.
CONTINUE ? [y/n] y
Addition of disk dsk9b as dsk9b succeeded.
Add disk dsk10b to the root disk group as dsk10b:
Addition of disk dsk10b as dsk10b succeeded.
Initialization of vold and the root disk group was successful.
# volprint -ht
DG NAME       GROUP-ID
DM NAME       DEVICE   TYPE     PRIVLEN  PUBLEN   PUBPATH
V  NAME       USETYPE  KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME       VOLUME   KSTATE   STATE    LENGTH   LAYOUT    ST-WIDTH  MODE
SD NAME       PLEX     PLOFFS   DISKOFFS LENGTH   DISK-NAME DEVICE

dg rootdg     830202105.1025.tinker

dm dsk10b     dsk10b   simple   512      261120   /dev/rdisk/dsk10b
dm dsk2b      dsk2b    simple   512      261120   /dev/rdisk/dsk2b
dm dsk3b      dsk3b    simple   512      261120   /dev/rdisk/dsk3b
dm dsk9b      dsk9b    simple   512      261120   /dev/rdisk/dsk9b

At this point, the rootdg disk group has been created, disks have been initialized and added to the rootdg disk group. The /etc/inittab file has been modified to ensure that LSM is started automatically by a system reboot. The /etc/vol/volboot file contains the names of the disks in the rootdg disk group so LSM can locate the rootdg disk group. You should verify that the vold and voliod daemons have been started as follows:
# /sbin/voldctl mode
mode: enabled
# /sbin/voliod
2 volume I/O daemons are running

You are now ready to create subdisks, plexes, and volumes using individual commands or the graphical user interface (lsmsa).


Using the volassist Command
Overview
A one-step command, volassist, provides the following functions:

- Finds space for and creates simple volumes
- Adds simple plexes to existing volumes
- Extends and shrinks existing volumes
- Provides for the migration of data from a specified disk
- Provides facilities for the online backup of existing volumes

You supply a keyword and command options that select the action the command is to perform. The volassist command uses available subdisks and chooses a best configuration based upon what is available. To avoid the risk of losing data availability or decreasing I/O performance, volassist does not create volumes with more than one mirror per disk; nor will it create striped mirrors with more than one stripe per disk. However, snapshot mirrors may reside on the same disk with normal mirrors of the volume, because the snapshot mirror is temporary and the snapshot volume that is created is not used for data access.

Replacing Disks under LSM Control
If a disk starts showing error conditions, you may want to move subdisks off the disk and replace the disk. You can replace an existing disk with a new disk, move volumes to the new disk, and attempt to recover any redundant (mirrored or RAID5) volumes on the disk. You cannot recover non-redundant volumes. You should restore non-redundant volumes from backup. If the disk being replaced is a boot disk, you can set up the new disk as a boot disk. If you replace a good disk, you must remove the disk from its disk group and place it in the free disk pool before you replace the disk. If you replace a disk that has failed and is disconnected, you do not need to remove the disk from the disk group. To replace a disk, enter the voldiskadm command and choose Replace a failed or removed disk from the main menu. A list of disks is displayed from which you can choose a replacement disk. If you have disks that are initialized for use with the LSM software, but not added to a disk group, you can select one of those disks as a replacement.


Using the volassist Command
The format for the volassist command is:
volassist [option] keyword attribute=value... [!]media_name...

If media_name is included on the command line, volassist uses the named disks. If !media_name is included on the command line, volassist considers all nonvolatile, nonreserved disks except those disks named by media_name. For instance, if !media_name is !disk02, volassist will not consider disk02 for the operation. The volassist command options are shown in Table B-4.
Table B-4 volassist Command Options

-b             Performs extended operations in the background. This applies to
               plex consistency recovery operations for volassist make, growto,
               and growby. It also applies to plex attach operations started by
               either volassist mirror or volassist snapstart.
-d defaults    Specifies a file that contains defaults for various attributes
               related to volume creation and space allocation. If not
               specified, the default is /etc/default/volassist.
-f             Not listed in the reference pages. Occasionally, upon execution
               of a volassist command, you are instructed to use the -f option
               to force the operation to execute.
-g diskgroup   Specifies the disk group for the operation. If not specified,
               the disk group is chosen based on the disk media name operands
               (if any) for the volassist make operation, or the volume
               operands for all other operations.
-U usetype     Limits the operation to apply to this usage type. Any attempt to
               affect volumes with a different usage type will fail. For
               volassist make, this is the usage type to use for the created
               volume. If not provided, the default is taken from the defaults
               file. If the defaults file does not exist, the default is fsgen.
Table B-5 provides a description of the volassist keywords.

Table B-5 volassist Command Keyword Description

make volume_name length [!]media_name
    Creates a volume with the specified name and length. If disk media names
    are provided, the volume is made from space on those disks. Otherwise, all
    nonvolatile, nonreserved disks in the disk group are considered candidates
    for allocation to the volume. Attributes, such as layout (concat, stripe,
    raid5), can be used.

mirror volume_name [!]media_name
    Creates a new mirror (plex) and attaches it to the volume. The volume must
    be enabled. If disk media names are provided, the plex is made from space
    on those disks. Otherwise, all nonvolatile, nonreserved disks in the disk
    group are considered candidates for allocation. Normal plex generation
    attributes can be supplied.

move volume_name !media_name
    Moves subdisks within the named volume from the exclusion disks designated
    by !media_name to other disks within the volume designated by media_name.
B ­ 20

A-PDF Split DEMO
Table B-5 volassist Command Keyword Description (Continued)
Keyword Function
Increases the length of the named volume to the new length. The length is increased by extending the length of the last subdisk in each plex of the volume, or by adding new subdisks concatenated to the end of each plex. If subdisk names are provided, new space is allocated only from the indicated disks. Otherwise, all nonvolatile, nonreserved disks in the disk group are considered candidates for allocation to the volume. Increases the length of the named volume by the provided length. The length is increased by extending the length of the last subdisk in each plex of the volume, or by adding new subdisks concatenated to the end of each plex. If subdisk names are provided, new space is allocated only from the indicated disks. Otherwise, all nonvolatile, nonreserved disks in the disk group are considered candidates for allocation to the volume. Decreases the length of the named volume to the new length. The length is decreased by removing or shortening subdisks to leave each plex with the desired volume length. Decreases the length of the named volume by the designated length. The length is decreased by removing and shortening subdisks to leave each plex with the desired volume length.

growto volume_name new_length [!]media_name[1] [2]

growby volume_name length_change [!]media_name[1] [2]

shrinkto volume_name new_length[1] shrinkby volume_name length_change[1] snapstart volume_name [!]media_name

volassist snapstart is the first step in using the volassist command to back up an LSM volume. The volassist snapstart command creates a write-only backup plex which gets attached to and synchronized with the named volume. When the backup plex is synchronized with the volume, it is ready to be used as a snapshot plex. When the backup volume is synchronized with the volume, the plex state changes to SNAPDONE. Keep in mind, it may take a very long time for the backup plex to become synchronized with the volume. Once synchronized, the backup plex continues to be updated until it is detached.
The volassist snapwait command can be used to track the state of the backup plex while it is being synchronized with the volume. If volassist snapstart is executed with the -b option to run in the background, the volassist snapwait command will exit when the snapshot volume state changes from SNAPATT to SNAPDONE. Use the volassist snapwait command after the volassist snapstart -b command in a script to wait until the backup plex is synchronized before detaching the snapshot volume. Once the backup plex is volume synchronized, it can be used for a backup. The volassist snapshot command detaches the backup plex and creates a new normal volume with the name designated by new_volume. The snapshot state is set to ACTIVE, as it is a normal functioning volume, and available for backing up.You should warn users to stop using the volume prior to executing the volassist snapshot. HP recommends that the volume be unmounted briefly to ensure that the snapshot data on disk is consistent and complete.

snapwait volume_name

snapshot volume_name new_volume

[1] A volume containing a striped plex cannot be grown or shrunk. [2] A volume must be enabled to be grown. In the examples in this section, the volassist command is used to create, mirror, grow, shrink, and move volumes. As shown at the beginning, it is assumed that there are two disks available for volume creation.
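Before turning to those examples, the following minimal script ties the three snapshot keywords together. The disk group dg1 and the volume names datavol and datavol-snap are hypothetical, and the actual backup step is left as a comment:

# Build the write-only backup plex in the background.
/sbin/volassist -g dg1 -b snapstart datavol

# Block until the backup plex state changes from SNAPATT to SNAPDONE.
/sbin/volassist -g dg1 snapwait datavol

# Warn users and briefly unmount datavol so the snapshot is consistent,
# then detach the backup plex as a new ACTIVE volume, datavol-snap.
/sbin/volassist -g dg1 snapshot datavol datavol-snap

# Back up datavol-snap here, then remove it or start the next snapstart.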

Example B-5 Creating a New Volume with volassist make
# volprint -g lottaspace -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

dg lottaspace   default      default  5000     917638011.1063.dove

dm disk04g      dsk4g        simple   4096     1783808  -
dm disk05g      dsk5g        simple   4096     1783808  -

# /sbin/volassist -g lottaspace make big-vol 100m layout=concat
# volprint -g lottaspace -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

dg lottaspace   default      default  5000     917638011.1063.dove

dm disk04g      dsk4g        simple   4096     1783808  -
dm disk05g      dsk5g        simple   4096     1783808  -

v  big-vol      fsgen        ENABLED  ACTIVE   204800   SELECT    -
pl big-vol-01   big-vol      ENABLED  ACTIVE   204800   CONCAT    -        RW
sd disk04g-01   big-vol-01   disk04g  0        204800   0         dsk4g    ENA

1. Determine what disks are available (dsk4g and dsk5g).
2. Create the new volume as a 100 MB (204800-sector) concatenated volume.
3. Verify that the volume was created and check which subdisks were used to create it. Notice that because the entire volume fits on one disk, only one subdisk, disk04g-01, was used.
Example B-6 Increasing the Length of a Volume with volassist
# /sbin/volassist growto big-vol 1500000
lsm:volassist: ERROR: 2017:Grow must use -f to force the operation.
# /sbin/volassist -f growto big-vol 1500000
# volprint -g lottaspace -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

dg lottaspace   default      default  5000     917638011.1063.dove

dm disk04g      dsk4g        simple   4096     1783808  -
dm disk05g      dsk5g        simple   4096     1783808  -

v  big-vol      fsgen        ENABLED  ACTIVE   1500000  SELECT    -
pl big-vol-01   big-vol      ENABLED  ACTIVE   1500000  CONCAT    -        RW
sd disk04g-01   big-vol-01   disk04g  0        1500000  0         dsk4g    ENA

# /sbin/volassist growby big-vol 2000000
lsm:volassist: ERROR: 2017:Grow must use -f to force the operation.

# /sbin/volassist -f growby big-vol 2000000
# volprint -g lottaspace -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

dg lottaspace   default      default  5000     917638011.1063.dove

dm disk04g      dsk4g        simple   4096     1783808  -
dm disk05g      dsk5g        simple   4096     1783808  -

v  big-vol      fsgen        ENABLED  ACTIVE   3500000  SELECT    -
pl big-vol-01   big-vol      ENABLED  ACTIVE   3500000  CONCAT    -        RW
sd disk04g-01   big-vol-01   disk04g  0        1783808  0         dsk4g    ENA
sd disk05g-01   big-vol-01   disk05g  0        1716192  1783808   dsk5g    ENA

# /sbin/volassist shrinkto big-vol 1500000
lsm:volassist: ERROR: 2018:Shrink must use -f to force the operation.

# /sbin/volassist -f shrinkto big-vol 1500000
# volprint -g lottaspace -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

dg lottaspace   default      default  5000     917638011.1063.dove

dm disk04g      dsk4g        simple   4096     1783808  -
dm disk05g      dsk5g        simple   4096     1783808  -

v  big-vol      fsgen        ENABLED  ACTIVE   1500000  SELECT    -
pl big-vol-01   big-vol      ENABLED  ACTIVE   1500000  CONCAT    -        RW
sd disk04g-01   big-vol-01   disk04g  0        1500000  0         dsk4g    ENA

1. Increase the size of the volume to 1,500,000 sectors.
2. Any attempt to increase or decrease the size of the volume while it is open fails. To override the failure, reexecute the command with the -f (force) option.
3. Verify that the volume has increased in size to 1,500,000 sectors.
4. Increase the size of the volume once more, this time using the growby keyword.
5. Again, the force option must be used to allow execution.
6. Note that the size of the volume is now greater than subdisk disk04g-01, so subdisk disk05g-01 has been added to the volume.
7. Decrease the size of the volume to 1,500,000 sectors.
8. The volume is open, so the force option must be used again.
9. The volume size has been reduced to 1,500,000 sectors, and subdisk disk05g-01 has been removed from the volume.

In the following example, a volume is mirrored with the volassist command. The mirror is created on a different SCSI bus, so subdisks have been created on SCSI bus 1 (disk disk07g).
Example B-7 Mirroring a Volume with volassist
# volprint -g lottaspace -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

dg lottaspace   default      default  5000     917638011.1063.dove

dm disk04g      dsk4g        simple   4096     1783808  -
dm disk05g      dsk5g        simple   4096     1783808  -
dm disk07g      dsk7g        simple   ...

v  big-vol      fsgen        ...