Shared Storage Pools:

A shared storage pool is a pool of SAN storage devices that can span multiple Virtual I/O Servers. It is based on a cluster of Virtual I/O Servers and a distributed data object repository. The repository uses a cluster filesystem developed specifically for storage virtualization; on the Virtual I/O Server it appears under a path such as /var/vio/SSP/bb_cluster/D_E_F_A_U_L_T_061310.

When using shared storage pools, the Virtual I/O Server provides storage through logical units that are assigned to client partitions. A logical unit is a file-backed storage device that resides in the cluster filesystem of the shared storage pool. It appears as a virtual SCSI disk in the client partition.
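
On the client partition such a logical unit is indistinguishable from any other virtual SCSI disk. A minimal check, assuming a client AIX LPAR where the mapped logical unit shows up as a placeholder disk hdisk1:

# lsdev -Cc disk                  <--the logical unit is listed as a "Virtual SCSI Disk Drive"
# lspv                            <--and can be used like any other hdisk (volume groups, filesystems, etc.)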

The Virtual I/O Servers that are part of the shared storage pool are joined together to form a cluster. Only Virtual I/O Server partitions can be part of a cluster. The Virtual I/O Server clustering model is based on Cluster Aware AIX (CAA) and RSCT technology.

A cluster can consist of (maximum number of nodes by VIOS level):
VIOS version 2.2.0.11, Fix Pack 24, Service Pack 1         <--1 node
VIOS version 2.2.1.3                                       <--4 nodes
VIOS version 2.2.2.0                                       <--16 nodes

------------------------------------------------------------------------------

Thin provisioning:

A thin-provisioned device represents a larger image than the actual physical disk space it is using. It is not fully backed by physical storage as long as the blocks are not in use. A thin-provisioned logical unit is defined with a user-specified size when it is created. It appears in the client partition as a virtual SCSI disk with that user-specified size. However, on a thin-provisioned logical unit, blocks on the physical disks in the shared storage pool are only allocated when they are used.

Consider a shared storage pool that has a size of 20 GB. If you create a logical unit with a size of 15 GB, the client partition will see a virtual disk with a size of 15 GB. But as long as the client partition does not write to the disk, only a small portion of that space will initially be used from the shared storage pool. If you create a second logical unit also with a size of 15 GB, the client partition will see two virtual SCSI disks, each with a size of 15 GB. So although the shared storage pool has only 20 GB of physical disk space, the client partition sees 30 GB of disk space in total.
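
A minimal sketch of that scenario using the commands covered later in this document (bb_lun1 and bb_lun2 are hypothetical LU names; bb_cluster, bb_pool and vhost0 are the example names used on this page):

$ mkbdsp -clustername bb_cluster -sp bb_pool 15G -bd bb_lun1 -vadapter vhost0    <--thin LU, hardly any pool space used yet
$ mkbdsp -clustername bb_cluster -sp bb_pool 15G -bd bb_lun2 -vadapter vhost0    <--pool is now overcommitted (30 GB of LUs on 20 GB)
$ lssp -clustername bb_cluster                                                   <--free space stays close to 20 GB until the clients write data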

After the client partition starts writing to the disks, physical blocks are allocated in the shared storage pool and the amount of free space in the shared storage pool decreases. Deleting files or logical volumes on a client partition does not return that space to the shared storage pool; the pool's free space does not increase.

When the shared storage pool is full, client partitions see I/O errors on their virtual SCSI disks. So even though a client partition may report free space on a disk, that information is not reliable once the shared storage pool is full.

To prevent such a situation, the shared storage pool provides a threshold that, when reached, writes an event to the error log of the Virtual I/O Server.

(If you use the -thick flag with the mkbdsp command, a thick-provisioned logical unit is created instead of a thin one: all blocks are allocated in the pool up front, so the client is guaranteed the full disk space. See the example below.)
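
A hedged example (bb_disk3 is a hypothetical backing device name; the rest follows the mkbdsp syntax shown in the Commands section below):

$ mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk3 -vadapter vhost0 -thick    <--thick LU: all 10 GB is allocated in the pool immediately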

------------------------------------------------------------------------------

When a cluster is created, you must specify one physical volume for the repository and at least one physical volume for the storage pool. The storage pool physical volumes are used to provide storage to the client partitions. The repository physical volume is used for cluster communication and to store the cluster configuration.

If you need to increase the free space in the shared storage pool, you can either add an additional physical volume or you can replace an existing volume with a bigger one. Physical disks cannot be removed from the shared storage pool.
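
For example, growing the pool used in this document could look like the following sketch (hdiskpower3 and hdiskpower4 are hypothetical new LUNs; the replace syntax depends on the VIOS level, and on newer levels the pv -replace command shown in the addendum below can be used instead):

$ lspv -clustername bb_cluster -capable                                                       <--check the new LUN is usable for the pool
$ chsp -add -clustername bb_cluster -sp bb_pool hdiskpower3                                   <--add it to the pool
$ chsp -replace -clustername bb_cluster -sp bb_pool -oldpv hdiskpower2 -newpv hdiskpower4     <--or replace an existing disk with a bigger one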


Requirements:
-each VIO Server must be able to resolve the hostnames of all other VIO Servers in the cluster (DNS, or /etc/hosts populated with all VIO Servers)
-the hostname command should return the FQDN (including the domain)
-VLAN-tagged interfaces are not supported for cluster communication on earlier VIOS versions
-Fibre Channel adapters should be set to dyntrk=yes and fc_err_recov=fast_fail (see the example after this list)
-the disks' reserve policy should be set to no_reserve, and the disks must be in Available state on all VIO Servers
-1 disk is needed for the repository (min. 10 GB) and 1 or more for data (min. 10 GB); these should be SAN FC LUNs
-Active Memory Sharing paging space cannot be on an SSP disk
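
A minimal sketch of setting these attributes on a VIOS (fscsi0 and hdisk10 are placeholder device names; for EMC PowerPath hdiskpower devices the reserve attribute is typically reserve_lock=no rather than reserve_policy=no_reserve):

$ chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm     <--takes effect after reboot (or once the adapter is not in use)
$ chdev -dev hdisk10 -attr reserve_policy=no_reserve
$ lsdev -dev hdisk10 -attr reserve_policy                             <--verify the setting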

------------------------------------------------------------------------------

Commands for create:
cluster -create -clustername bb_cluster -spname bb_pool -repopvs hdiskpower1 -sppvs hdiskpower2 -hostname bb_vio1
        -clustername bb_cluster                                                  <--name of the cluster
        -spname bb_pool                                                          <--storage pool name
        -repopvs hdiskpower1                                                     <--disk of repository
        -sppvs hdiskpower2                                                       <--storage pool disk
        -hostname bb_vio1                                                        <--VIO Server hostname (where to create cluster)
(This command will create cluster, start CAA daemons and create shared storage pool)


cluster -addnode -clustername bb_cluster -hostname bb_vio2                       adding a node to the cluster (up to 16 nodes)
chsp -add -clustername bb_cluster -sp bb_pool hdiskpower2                        adding a disk to a shared storage pool

mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk1 -vadapter vhost0     creating a 10G LUN and assigning to vhost0 (lsmap -all will show it)
mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk2                      creating a 10G LUN

mkbdsp -clustername bb_cluster -sp bb_pool -bd bb_disk2 -vadapter vhost0         assigning LUN to a vhost adapter (command works only if bd name is unique)
mkbdsp -clustername bb_cluster -sp bb_pool -luudid c7ef7a2 -vadapter vhost0      assigning an earlier created LUN by LUN ID to a vhost adapter (same as above)



Commands for display:
cluster -list                                                      display cluster name and ID
cluster -status -clustername bb_cluster                            display cluster state and pool state on each node

lssp -clustername bb_cluster                                       list storage pool details (pool size, free space...)
lssp -clustername bb_cluster -sp bb_pool -bd                       list created LUNs in the storage pool (backing devices in lsmap -all)

lspv -clustername bb_cluster -sp bb_pool                           list physical volumes of shared storage pool (disk size, id)
lspv -clustername bb_cluster -capable                              list which disk can be added to the cluster

lscluster -c                                                       list cluster configuration
lscluster -d                                                       list disk details of the cluster
lscluster -m                                                       list info about nodes (interfaces) of the cluster
lscluster -s                                                       list network statistics of the local node (packets sent...)
lscluster -i -n bb_cluster                                         list interface information of the cluster

odmget -q "name=hdiskpower2 and attribute=unique_id" CuAt          checking LUN ID (as root)



Commands for remove:
rmbdsp -clustername bb_cluster -sp bb_pool -bd bb_disk1            remove created LUN (backing device will be deleted from vhost adapter)
                                                                   (physical disks, for example hdiskpower..., cannot be removed from the cluster)
cluster -rmnode -clustername bb_cluster -hostname bb_vios1         remove node from cluster
cluster -delete -clustername bb_cluster                            remove cluster completely

------------------------------------------------------------------------------

Create cluster and Shared Storage Pool:

1. create a cluster and pool: cluster -create ...
2. add additional nodes to the cluster: cluster -addnode
3. check which physical volumes can be added: lspv -clustername clusterX -capable
4. add physical volumes: chsp -add
5. create and map LUNs to clients: mkbdsp -clustername... (a combined example follows)
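
Putting those steps together with the example names used earlier on this page (a sketch; hdiskpower3 is a hypothetical additional LUN):

$ cluster -create -clustername bb_cluster -spname bb_pool -repopvs hdiskpower1 -sppvs hdiskpower2 -hostname bb_vio1
$ cluster -addnode -clustername bb_cluster -hostname bb_vio2
$ lspv -clustername bb_cluster -capable
$ chsp -add -clustername bb_cluster -sp bb_pool hdiskpower3
$ mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk1 -vadapter vhost0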


------------------------------------------------------------------------------

cleandisk -r hdiskX                    clean cluster signature from hdisk
cleandisk -s hdiskX                    clean storage pool signature from hdisk
/var/vio/SSP                           cluster related directory (and files) will be created in this path
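
Typical use, assuming hdisk5 is a LUN that used to belong to another cluster (a sketch; until its old signature is cleaned, the disk is not offered as a candidate):

$ lspv -clustername bb_cluster -capable        <--the reused LUN does not show up
$ cleandisk -r hdisk5                          <--remove the old cluster signature (-s would remove a storage pool signature)
$ lspv -clustername bb_cluster -capable        <--the LUN can now be added with chsp -add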

------------------------------------------------------------------------------

Managing snapshots:

Snapshots of a logical unit can be created and later rolled back in case of problems.

# snapshot -create bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1    <--create a snapshot

# snapshot -list -clustername bb_cluster -spname bb_pool                                 <--list snapshots of a storage pool
Lu Name          Size(mb)    ProvisionType    Lu Udid
bb_disk1         10240       THIN             4aafb883c949d36a7ac148debc6d4ee7
Snapshot
bb_disk1_snap

# snapshot -rollback bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1  <--rollback a snapshot to a LUN
$ snapshot -delete bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1    <--delete a snapshot

------------------------------------------------------------------------------

Setting alerts for Shared Storage Pools:

Because thin provisioning is in place, the real free space of the underlying storage cannot be seen exactly from the clients. If the storage pool becomes 100% full, I/O errors occur on the client LPARs. To avoid this, alerts can be configured:

$ alert -list -clustername bb_cluster -spname bb_pool
PoolName                 PoolID                             Threshold%
bb_pool                  000000000A8C1517000000005150C18D   35                        <--current threshold: an alert is raised when free space drops below this percentage

# alert -set -clustername bb_cluster -spname bb_pool -type threshold -value 25        <--if free space goes below 25% it will alert

# alert -list -clustername bb_cluster -spname bb_pool
PoolName                 PoolID                             Threshold%
bb_pool                  000000000A8C1517000000005150C18D   25                        <--new value can be seen here

$ alert -unset -clustername bb_cluster -spname bb_pool                                <--unset an alert
the warning can then be seen in the error log of the Virtual I/O Server (see below)
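
To look for that warning on the VIOS (a sketch; the exact entry text depends on the VIOS level, but as noted in the addendum below the pool events are logged with resource name VIOD_POOL):

$ errlog                          <--padmin shell, short listing; look for VIOD_POOL entries
$ errlog -ls | more               <--detailed listing of the same entries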

----------------------------------------

 

Addendum:

Here are my notes of the Shared Storage Pools commands that you need to remember. I have said it before: SSP is big on concepts and on system admin time saving, but it is very simple to operate.

 

Creating the pool (once only) by example:

  • Assumption: 1 GB Repository disk = hdisk15 and Pool data disks = hdisk7 to hdisk14, cluster and pool both called "alpha"
  • Assumption: IBM V7000 A has LUNs: hdisk7 hdisk8 hdisk9 hdisk10
  • Assumption: IBM V7000 B has LUNs: hdisk11 hdisk12 hdisk13 hdisk14

Examples

  • cluster -create -clustername alpha  -spname alpha  -repopvs hdisk15   -sppvs hdisk7 hdisk8   -hostname orangevios1.domain.com
  • cluster -addnode -clustername alpha  -hostname silvervios1.domain.com
  • failgrp -modify -fg Default -attr FGName=V7000A
  • failgrp -create -fg V7000B: hdisk11 hdisk12
  • pv -add -fg V7000A: hdisk9 hdisk10 V7000B: hdisk13 hdisk14

Shared Storage Pools - Daily work

Create a LU (Logical Unit - virtual disk in the pool)

  •  lu -remove | -map | -unmap | -list  [-lu name]

  •  lu -create -lu name -size nnG -vadapter vhostXX [-thick]

Examples

  • lu -create -lu lu42 -size 32G -vadapter vhost66 -thick

  • lu -map    -lu lu42           -vadapter vhost22

  • lu -list

  • lu -list -verbose

  • lu -remove -lu lu42

Snapshots - stop the VM for a safe, consistent disk image, but you could (if confident) take a live snapshot and rely on filesystem logs and application-based data recovery such as RDBMS transaction logs

  •  snapshot {-create | -delete | -rollback | -list}  name [-lu <list-of-LU-names>]     -clustername x -spname z

 Examples

  • snapshot -create   bkup1 -lu lu42 lu43   -clustername alpha -spname alpha

  • snapshot -rollback bkup1    -clustername alpha -spname alpha

  • snapshot -delete    bkup1    -clustername alpha -spname alpha

 

Shared Storage Pool – Weekly Configuration & Monitoring

Configuration Details

  • cluster -list                  <-- yields cluster name

  • cluster -status -clustername alpha

  • cluster -status -clustername alpha -verbose <-- shows you the poolname

  • lscluster -d <-- yields all the hdisks with local names for each VIOS

Monitor pool use

  •  lssp -clustername alpha

  •  lssp -clustername alpha -sp alpha -bd

  • Note this command uses "-sp" and not "-spname" like  many others.

Monitor for issues

  • alert -set -type {threshold | overcommit} -value N

  • alert -list

  • Ongoing monitoring of VIOS logs for warnings

  • Note - Pool Alert Events are logged to your HMC which you can get emailed to people.
    Look for Resource-Name=VIOD_POOL     Description=Informational Message

 

Shared Storage Pool – Quarterly / Yearly Maintenance

Pool mirroring check

  •  failgrp -create       <-- once only when creating the pool
  •  failgrp -list [-verbose]

Growing the pool size and monitoring

  •  pv -list [-verbose]
  •  pv -list -capable    <-- check new LUN ready
  •  pv -add -fg a: hdisk100 b: hdisk101

Moving the pool to a different disk subsystem

  •  pv -replace -oldpv hdisk100 -newpv hdisk200
