VIRTUAL SCSI

Virtual SCSI is based on a client/server relationship. The Virtual I/O Server owns the physical resources and acts as server or, in SCSI terms, target device. The client logical partitions access the virtual SCSI backing storage devices provided by the Virtual I/O Server as clients.

Virtual SCSI server adapters can be created only on the Virtual I/O Server. For HMC-managed systems, virtual SCSI adapters are created and assigned to logical partitions using partition profiles.

A vhost (virtual SCSI server) adapter behaves like a normal SCSI adapter: you can have multiple disks assigned to it. Usually one virtual SCSI server adapter is mapped to one virtual SCSI client adapter, mapping backing devices through to an individual LPAR. It is also possible to map a virtual SCSI server adapter to multiple LPARs, which is useful for virtual optical and/or tape devices, allowing removable media to be shared between multiple client partitions.

on VIO server:
root@vios1: / # lsdev -Cc adapter
vhost0  Available       Virtual SCSI Server Adapter
vhost1  Available       Virtual SCSI Server Adapter
vhost2  Available       Virtual SCSI Server Adapter


The client partition accesses its assigned disks through a virtual SCSI client adapter. The disks, logical volumes or file-backed storage presented through this virtual adapter appear as virtual SCSI disk devices on the client.

on VIO client:
root@aix21: / # lsdev -Cc adapter
vscsi0 Available  Virtual SCSI Client Adapter

root@aix21: / # lscfg -vpl hdisk2
  hdisk2           U9117.MMA.06B5641-V6-C13-T1-L890000000000  Virtual SCSI Disk Drive

In SCSI terms:
virtual SCSI server adapter: target
virtual SCSI client adapter: initiator
(Analogous to the client/server model, where the client is the initiator.)

Physical disks presented to the Virtual I/O Server can be exported and assigned to a client partition in a number of different ways:
- The entire disk is presented to the client partition.
- The disk is divided into several logical volumes, which can be presented to a single client or multiple different clients.
- With the introduction of Virtual I/O Server 1.5, files created on these disks can be used as file-backed storage devices.
- With the introduction of Virtual I/O Server 2.2 Fixpack 24 Service Pack 1, logical units can be created from a shared storage pool.

The IVM and HMC environments present the same storage management concepts under different names: the Storage Pool interface under IVM is essentially the same as LVM under HMC, and the terms are sometimes used interchangeably. So "volume group" can refer to both volume groups and storage pools, and "logical volume" can refer to both logical volumes and storage pool backing devices.

Once these virtual SCSI server/client adapter connections have been set up, one or more backing devices (whole disks, logical volumes or files) can be presented using the same virtual SCSI adapter. 
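
For example, one virtual SCSI server adapter with two backing devices mapped to it looks like this on the VIOS (illustrative output; device names and location codes are examples):

$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          U9117.MMA.06B5641-V1-C13                     0x00000006

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk34
...

VTD                   vtscsi1
Status                Available
LUN                   0x8200000000000000
Backing device        testlv_client
...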

When using Live Partition Mobility, the storage also needs to be assigned to the Virtual I/O Servers on the target server.

----------------------------

Number of LUNs attached to a VSCSI adapter:

VSCSI adapters have a fixed queue depth that varies depending on how many VSCSI LUNs are configured for the adapter. There are 512 command elements of which 2 are used by the adapter, 3 are reserved for each VSCSI LUN for error recovery and the rest are used for IO requests. Thus, with the default queue_depth of 3 for VSCSI LUNs, that allows for up to 85 LUNs to use an adapter: (512 - 2) / (3 + 3) = 85. 

So if we need higher queue depths for the devices, the number of LUNs per adapter is reduced. E.g., if we want to use a queue_depth of 25, that allows 510/28 = 18 LUNs per adapter. We can configure multiple VSCSI adapters to handle many LUNs with high queue depths; each adapter requires additional memory. A VIOC may also have more than one VSCSI adapter connected to the same VIOS if more bandwidth is needed.

Also, one should set the queue_depth attribute on the VIOC's hdisk to match that of the mapped hdisk's queue_depth on the VIOS.

Note that changing the queue_depth on an hdisk at the VIOS requires unmapping the disk from the VIOC and remapping it back, or, as a simpler approach, changing the value in the ODM (e.g. # chdev -l hdisk30 -a queue_depth=20 -P) and then rebooting the VIOS.
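
A quick way to compare the two sides (the hdisk numbers are only examples; they usually differ between the VIOS and the client):

on VIO server (as root; as padmin use: lsdev -dev hdisk30 -attr queue_depth):
# lsattr -El hdisk30 -a queue_depth                <--queue_depth of the backing hdisk on the VIOS

on VIO client:
# lsattr -El hdisk2 -a queue_depth                 <--should be set to the same value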

----------------------------

File Backed Virtual SCSI Devices

Virtual I/O Server (VIOS) version 1.5 introduced file-backed virtual SCSI devices. These virtual SCSI devices serve as disks or optical media devices for clients.  

In the case of file-backed virtual disks, clients are presented with a file from the VIOS that they access as a SCSI disk. With file-backed virtual optical devices, you can store, install and back up media on the VIOS and make it available to clients.

----------------------------

Check VSCSI adapter mapping on client:

root@bb_lpar: / # echo "cvai" | kdb | grep vscsi                             <--cvai is a kdb subcommand
read vscsi_scsi_ptrs OK, ptr = 0xF1000000C01A83C0
vscsi0     0x000007 0x0000000000 0x0                aix-vios1->vhost2        <--shows which vhost is used on which vio server for this client
vscsi1     0x000007 0x0000000000 0x0                aix-vios1->vhost1
vscsi2     0x000007 0x0000000000 0x0                aix-vios2->vhost2


Checking for a specific vscsi adapter (vscsi0):

root@bb_lpar: /root # echo "cvscsi\ncvai vscsi0"| kdb |grep -E "vhost|part_name"
priv_cap: 0x1  host_capability: 0x0  host_name: vhost2 host_location:
host part_number: 0x1   os_type: 0x3    host part_name: aix-vios1

----------------------------

Another way to find out the VSCSI and VHOST adapter mapping:
If the whole disk is assigned to a VIO client, then PVID can be used to trace back connection between VIO server and VIO client.

1. root@bb_lpar: /root # lspv | grep hdisk0                                  <--check the PVID of the disk in question on the client
   hdisk0          00080e82a84a5c2a                    rootvg

2. padmin@bb_vios1: /home/padmin # lspv | grep 5c2a                          <--check which disk has this pvid on vio server
   hdiskpower21     00080e82a84a5c2a                     None

3. padmin@bb_vios1: /home/padmin # lsmap -all -field SVSA "Backing Device" VTD "Client Partition ID" Status -fmt ":" | grep hdiskpower21
   vhost13:0x0000000c:hdiskpower21:pid12_vtd0:Available                      <--check vhost adapter of the given disk

 ----------------------------

Managing VSCSI devices (server-client mapping)

1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (create vscsi adapter, name the client which can use it, then create the same in profile)
                                (the profile can be updated: configuration -> save current config.)
                                (in case of an optical device, check the 'Any client partition can connect' option)
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (create the same adapter as above, the ids should be mapped, do it in the profile as well)
3. cfgdev (VIO server), cfgmgr (client)                        <--it will bring up vhostX on vio server, vscsiX on client
4. create needed disk assignments:
  -using physical disks:
    mkvdev -vdev hdisk34 -vadapter vhost0 -dev vclient_disk    <--for easier identification useful to give a name with the -dev flag
    rmvdev -vdev <backing dev.>                                <--back. dev can be checked with lsmap -all (here vclient_disk)

  -using logical volumes:
    mkvg -vg testvg_vios hdisk34                               <--creating vg for lv
    lsvg                                                       <--listing a vg
    reducevg <vg> <disk>                                       <--removes a disk from the vg (the vg is deleted when its last disk is removed)

    mklv -lv testlv_client testvg_vios 10G                     <--creating the lv that will be mapped to the client
    lsvg -lv <vg>                                              <--lists lvs under a vg
    rmlv <lv>                                                  <--removes an lv

    mkvdev -vdev testlv_client -vadapter vhost0 -dev <any_name>        <--for easier identification useful to give a name with the -dev flag
                                                                       (here the backing device is an lv (testlv_client))
    rmvdev -vdev <back. dev.>                                  <--removes an assignment to the client

  -using logical volumes just with storage pool commands:
   (vg=sp, lv=bd)

    mksp <vgname> <disk>                                       <--creating a vg (sp)
    lssp                                                       <--listing storage pools (vgs)
    chsp -add -sp <sp> PhysicalVolume                          <--adding a disk to the sp (vg)
    chsp -rm -sp bb_sp hdisk2                                  <--removing hdisk2 from bb_sp (storage pool)

    mkbdsp -bd <lv> -sp <vg> 10G                               <--creates an lv with given size in the sp
    lssp -bd -sp <vg>                                          <--lists lvs in the given vg (sp)
    rmbdsp -bd <lv> -sp <vg>                                   <--removes an lv from the given vg (sp)

    mkvdev..., rmvdev... also applies

  -using file backed storage pool
    first a normal (LV) storage pool should be created with: mkvg or mksp, after that:
    mksp -fb <fb sp name> -sp <vg> -size 20G                   <--creates a file backed storage pool in the given storage pool with given size
                                                               (it will look like an lv, and a fs will be created automatically as well)
    lssp                                                       <--it will show as FBPOOL
    chsp -add -sp clientData -size 1G                          <--increase the size of the file storage pool (ClientData) by 1G


    mkbdsp -sp fb_testvg -bd fb_bb -vadapter vhost2 10G        <--creates a file backed device and assigns it to the given vhost
    mkbdsp -sp fb_testvg -bd fb_bb1 -vadapter vhost2 -tn balazs 8G <--it will also specify a virt. target device name (-tn)

    lssp -bd -sp fb_testvg                                     <--lists the lvs (backing devices) of the given sp
    rmbdsp -sp fb_testvg -bd fb_bb1                            <--removes the given lv (bd) from the sp
    rmsp <file sp name>                                        <--removes the given file storage pool


Removing a virtual SCSI server adapter (and its mappings):
rmdev -dev vhost1 -recursive
----------------------------



On client partitions, MPIO for virtual SCSI devices currently supports only fail_over mode (which means only one path is active at a time):
root@bb_lpar: / # lsattr -El hdisk0
PCM             PCM/friend/vscsi                 Path Control Module        False
algorithm       fail_over                        Algorithm                  True
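
Both paths are listed as Enabled, but with the fail_over algorithm I/O flows through only one of them at a time (illustrative output):

root@bb_lpar: / # lspath -l hdisk0
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1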


----------------------------

Multipathing with dual VIO config:

on VIO SERVER:
# lsdev -dev <hdisk_name> -attr                                    <--checking disk attributes
# lsdev -dev <fscsi_name> -attr                                    <--checking FC attributes


# chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm  <--a reboot is needed for these
      fc_err_recov=fast_fail                                       <--in case of a link event IO will fail immediately
      dyntrk=yes                                                   <--allows the VIO server to tolerate cabling changes in the SAN

# chdev -dev hdisk3 -attr reserve_policy=no_reserve                <--each disk must be set to no_reserve
    reserve_policy=no_reserve                                      <--if this is configured, dual VIO servers can present a disk to the client



on VIO client:
# chdev -l vscsi0 -a vscsi_path_to=30 -a vscsi_err_recov=fast_fail -P    <--the path timeout checks the health of the VIOS and detects if the VIO Server adapter isn't responding
    vscsi_path_to=30                            <--by default it is disabled (0), each client adapter must be configured, minimum is 30
    vscsi_err_recov=fast_fail                   <--failover will happen immediately rather than delayed


# chdev -l hdisk0 -a queue_depth=20 -P          <--it must match the queue depth value used for the physical disk on the VIO Server
    queue_depth                                 <--it determines how many requests will be queued on the disk


# chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P    <--health check updates the path state automatically
                                                                       (otherwise a failed path must be re-enabled manually)
    hcheck_interval=60                        <--how often the health check runs; each disk must be configured (hcheck_interval=0 means it is disabled)
    hcheck_mode=nonactive                     <--hcheck is performed on nonactive paths (paths with no active IO)


Never set the hcheck_interval lower than the read/write timeout value of the underlying physical disk on the Virtual I/O Server. Otherwise, an error detected by the Fibre Channel adapter causes new healthcheck requests to be sent before the running requests time out.

The minimum recommended value for the hcheck_interval attribute is 60 for both Virtual I/O and non Virtual I/O configurations.
In the event of adapter or path issues, setting the hcheck_interval too low can cause severe performance degradation or possibly cause I/O hangs. 

It is best not to configure more than 4 to 8 paths per LUN (to avoid too much health check I/O), and to set the hcheck_interval to 60 both in the client partition and on the Virtual I/O Server.
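
A quick way to verify these settings on a client hdisk:

root@bb_lpar: / # lsattr -El hdisk0 -a hcheck_interval -a hcheck_mode -a queue_depth    <--should show 60, nonactive and the queue depth used on the VIOS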


----------------------------

TESTING PATH PRIORITIES:

By default all the paths are defined with priority 1 meaning that traffic will go through the first path. 
If you want to control which path is used, the 'path priority' attribute has to be updated.
Priority of the VSCSI0 path remains at 1, so it is the primary path.
Priority of the VSCSI1 path will be changed to 2, so it will be lower priority. 


PREPARATION ON CLIENT:
# lsattr -El hdisk1 | grep hcheck
hcheck_cmd      test_unit_rdy                            <--hcheck is configured, so path should come back automatically from failed state
hcheck_interval 60                              
hcheck_mode     nonactive                       


# chpath -l hdisk1 -p vscsi1 -a priority=2               <--I changed priority=2 on vscsi1 (by default both paths are priority=1)

# lspath -AHE -l hdisk1 -p vscsi0
    priority  1     Priority    True

# lspath -AHE -l hdisk1 -p vscsi1
    priority  2     Priority    True

So, configuration looks like this:
VIOS1 -> vscsi0 -> priority 1
VIOS2 -> vscsi1 -> priority 2


TEST 1:

1. ON VIOS2: # lsmap -all                                 <--checking disk mapping on VIOS2
    VTD                   testdisk
    Status                Available
    LUN                   0x8200000000000000
    Backing device        hdiskpower1
    ...

2. ON VIOS2: # rmdev -dev testdisk                        <--removing disk mapping from VIOS2

3. ON CLIENT: # lspath
    Enabled hdisk1 vscsi0
    Failed  hdisk1 vscsi1                                 <--it will show a failed path on vscsi1 (this path comes from VIOS2)

4. ON CLIENT: # errpt                                     <--error report will show "PATH HAS FAILED"
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    DE3B8540   0324120813 P H hdisk1         PATH HAS FAILED

5. ON VIOS2: # mkvdev -vdev hdiskpower1 -vadapter vhost0 -dev testdisk    <--configure back disk mapping from VIOS2

6. ON CLIENT: # lspath                                    <--in 30 seconds path will come back automatically
    Enabled hdisk1 vscsi0
    Enabled hdisk1 vscsi1                                 <--because of hcheck, path came back automatically (no manual action was needed)

7. ON CLIENT: # errpt                                     <--error report will show path has been recovered
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    F31FFAC3   0324121213 I H hdisk1         PATH HAS RECOVERED


TEST 2:

I did the same on VIOS1 (rmdev ... of the disk mapping), which has path priority 1 (IO is going there by default):

ON CLIENT: # lspath
    Failed  hdisk1 vscsi0
    Enabled hdisk1 vscsi1

ON CLIENT: # errpt                                        <--an additional disk operation error will be in errpt
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    DCB47997   0324121513 T H hdisk1         DISK OPERATION ERROR
    DE3B8540   0324121513 P H hdisk1         PATH HAS FAILED

----------------------------

How to change a VSCSI adapter on client:

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi2                                                  <--we want to change vscsi2 to vscsi1

On VIO client:
1. # rmpath -p vscsi2 -d                                               <--remove paths from vscsi2 adapter
2. # rmdev -dl vscsi2                                                  <--remove adapter

On VIO server:

3. # lsmap -all                                                        <--check assignment and vhost device
4. # rmdev -dev vhost0 -recursive                                      <--remove assignment and vhost device

On HMC:
5. Remove the deleted adapter from the client (from the profile too)
6. Remove the deleted adapter from the VIOS (from the profile too)
7. Create the new adapter on the client (in the profile too)           <--cfgmgr on client
8. Create the new adapter on the VIOS (in the profile too)             <--cfgdev on VIO server

On VIO server:
9. # mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev rootvg_hdisk0      <--create new assignment

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1                                                  <--vscsi1 is there (cfgmgr may be needed)

----------------------------

Assigning and moving DVD RAM between LPARS


1. lsdev -type optical                    <--check if VIOS owns an optical device (you should see something like: cd0 Available SATA DVD-RAM Drive)
2. lsmap -all                             <--to see if cd0 is already mapped and which vhost to use for assignment (lsmap -all | grep cd0)
3. mkvdev -vdev cd0 -vadapter vhost0      <--it will create vtoptX as a virtual target device (check with lsmap -all )

4. cfgmgr (on client lpar)                <--bring up the cd0 device on the client (before moving the cd0 device, rmdev the device on the current client first)

5. rmdev -dev vtopt0 -recursive           <--to move cd0 to another client, remove assignment from vhost0
6. mkvdev -vdev cd0 -vadapter vhost1      <--create new assignment to vhost1

7. cfgmgr (on other client lpar)          <--bring up cd0 device on other client

(Because the VIO server adapter is configured with the "Any client partition can connect" option, such adapter pairs are not suited for client disks.)

----------------------------


Shared Ethernet Adapter (SEA)

A SEA can be used to connect a physical Ethernet network to a virtual Ethernet network. The SEA hosted in the Virtual I/O Server acts as a layer-2 bridge between the internal and external network. With Shared Ethernet Adapters on the Virtual I/O Server, virtual Ethernet adapters on client logical partitions can send and receive outside network traffic.

Shared Ethernet Adapter is a Virtual I/O Server component that bridges a physical Ethernet adapter and one or more virtual Ethernet adapters:
-The real adapter can be a physical Ethernet adapter, a Link Aggregation or EtherChannel device, or a Logical Host Ethernet Adapter. The real adapter cannot be another Shared Ethernet Adapter or a VLAN pseudo-device.
-The virtual Ethernet adapter (trunk adapter in the SEA) must be created with the following settings:


Adapter ID: any ID for the virtual Ethernet adapter
Port Virtual Ethernet: the PVID given to this adapter (usually a VLAN ID which is not used on any other adapter, to avoid untagging packets)
IEEE 802.1Q: additional VLAN IDs can be specified here
Ethernet bridging: this checkbox enables access to external networks
Priority: for SEA failover mode, the trunk priority determines which SEA is the primary (priority 1) and which is the backup (priority 2)
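
If you prefer the HMC CLI to the GUI, a trunk adapter with these settings can also be added dynamically with chhwres (a hedged sketch; the managed system, partition name, slot and VLAN numbers are placeholders/examples):

chhwres -r virtualio -m <man. sys.> -o a -p <vios name> -s <slot id> --rsubtype eth \
  -a "ieee_virtual_eth=1,port_vlan_id=4000,\"addl_vlan_ids=10,20\",is_trunk=1,trunk_priority=1"

(Remember to save the change into the profile afterwards, as with any DLPAR operation.)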


--------------------------------------------------

A Shared Ethernet Adapter provides access by connecting the internal VLANs with the VLANs on the external switches. Using this connection, logical partitions without physical adapters can share the IP subnet with stand-alone systems and other external logical partitions. (A virtual Ethernet adapter connected to the SEA must have the Access External Networks check box enabled.)

The Shared Ethernet Adapter forwards outbound packets received from a virtual Ethernet adapter to the external network and forwards inbound packets to the appropriate client logical partition over the virtual Ethernet link to that logical partition.

If SEA failover has been configured, leave the SEA itself without an IP address. (It also makes maintenance of the SEA easier.)

Checking SEA on VIO server:
padmin@vios1: / # lsdev -dev ent* | grep Shared
ent8    Available       Shared Ethernet Adapter

padmin@vios1: / # lsdev -dev ent8 -attr | grep adapter
pvid_adapter  ent4     Default virtual adapter to use for non-VLAN-tagged packets         True
real_adapter  ent0     Physical adapter associated with the SEA                           True

---------------------------------------------------

Quality of Service

Quality of Service (QoS) is a Shared Ethernet Adapter feature which influences bandwidth. QoS allows the Virtual I/O Server to give a higher priority to some types of packets. The Shared Ethernet Adapter on the VIO Server can inspect bridged VLAN-tagged traffic for the VLAN priority field in the VLAN header. The 3-bit VLAN priority field allows each individual packet to be prioritized with a value from 0 to 7 to distinguish more important traffic from less important traffic. More important traffic is sent preferentially and uses more Virtual I/O Server bandwidth than less important traffic.
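
QoS is controlled through the qos_mode attribute of the SEA device (a hedged example; ent8 is the SEA used in the other examples of this document):

$ chdev -dev ent8 -attr qos_mode=loose             <--possible values: disabled (default), strict, loose
$ lsdev -dev ent8 -attr qos_mode                   <--check the current setting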

---------------------------------------------------

PVID:

The SEA directs packets based on the VLAN ID tags. One of the virtual adapters in the SEA must be designated (at creation) as the default PVID adapter (ent1 in the example below). Ethernet frames without any VLAN ID tag that the SEA receives from the external network are forwarded to this adapter and assigned the default PVID.

---------------------------------------------------

SEA and VLAN traffic:

The VLAN tag information is referred to as VLAN ID (VID). Ports on a switch are configured as being members of a VLAN designated by the VID for that port. The default VID for a port is referred to as the Port VID (PVID). The VID can be added to an Ethernet packet either by a VLAN-aware host, or by the switch in the case of VLAN-unaware hosts.

For VLAN-unaware hosts, a port is set up as untagged and the switch will tag all packets entering through that port with the Port VLAN ID (PVID). The switch will also untag all packets exiting that port before delivery to the VLAN unaware host. A port used to connect VLAN-unaware hosts is called an untagged port, and it can be a member of only one VLAN identified by its PVID.

Hosts that are VLAN-aware can insert and remove their own tags and can be members of more than one VLAN. These hosts are typically attached to ports that do not remove the tags before delivering the packets to the host, but will insert the PVID tag when an untagged packet enters the port.

A port will only allow packets that are untagged or tagged with the tag of one of the VLANs that the port belongs to.


In the example configuration (from the original figure: ent1 is the SEA's PVID adapter, LPAR2 is VLAN-unaware, LPAR1 has en0 untagged and en1 on VLAN 10), incoming packets from external networks:
- SEA forwards untagged packets to ent1 and these are tagged with the default PVID=1
- SEA forwards packets with VID=1 or VID=10 to adapter ent1 as well
- before LPAR2 receives packets the Hypervisor will remove the VLAN tag
- en0 on LPAR1 will receive untagged packets
- en1 on LPAR1 will receive only packets with VID=10

Outgoing packets to external networks:
- packets sent by LPAR2 will be tagged by Hypervisor, with PVID=1
- packets sent by LPAR1 through en1 are tagged with VID=10 by AIX, and en0 packets are tagged with PVID=1 by Hypervisor
- at VIOS: packets tagged with VID=10, are processed with the VLAN tag unmodified.
- at VIOS: packets with VID=1 (PVID of ent1 in SEA) are untagged before ent1 receives them, then bridged to ent0 and sent out.
 (VLAN-unaware destination devices on the external network will be able to receive these packets.)

(The virtual Ethernet adapter ent1 of the SEA also uses VID 10 and will receive the packet from the POWER Hypervisor with the VLAN tag unmodified. The packet will then be sent out through ent0 with the VLAN tag unmodified. So, only VLAN-capable destination devices will be able to receive these. )

---------------------------------------------------

Shared Ethernet Adapter Failover:

In a Shared Ethernet Adapter failover configuration there are two Virtual I/O Servers, each running a Shared Ethernet Adapter. The Shared Ethernet Adapters communicate with each other on a control channel using two virtual Ethernet adapters configured on a separate VLAN. The control channel is used to carry heartbeat packets between the two Shared Ethernet Adapters. When the primary Shared Ethernet Adapter loses connectivity the network traffic is automatically switched to the backup Shared Ethernet Adapter.



The trunk priority for the Virtual Ethernet adapters on VIO Server 1 (which has the Access external network flag set) is set to 1. This means that normally the network traffic will go through VIO Server 1. VIO Server 2 with trunk priority 2 is used as backup in case VIO Server 1 has no connectivity to the external network.

more info: https://www-304.ibm.com/support/docview.wss?uid=isg3T1011040

---------------------------------------------------

Shared Ethernet Adapter failover with Loadsharing

Virtual I/O Server Version 2.2.1.0, or later, provides a load sharing function that enables use of the bandwidth of the backup Shared Ethernet Adapter (SEA) as well, making effective use of the backup SEA bandwidth.



In this example the packets of VLAN 10 will go through VIOS1 and packets of VLAN 20 will go through VIOS2.

Prerequisites:
- Both of primary and backup Virtual I/O Servers are at Version 2.2.1.0, or later.
- Two or more trunk adapters are configured for the primary and backup SEA pair.
- The virtual local area network (VLAN) definitions of the trunk adapters are identical between the primary and backup SEA pair.

To create or enable SEA failover with load sharing, you have to enable load sharing mode on the primary SEA first, before enabling load sharing mode on the backup SEA. The load sharing algorithm automatically determines which trunk adapters will be active for which VLANs in the SEA pair. You cannot specify the active trunk adapters of the SEAs manually in load sharing mode.

Changing the SEA to Load Sharing mode:
$ chdev -dev ent6 -attr ha_mode=sharing

---------------------------------------------------

To reduce SEA failover time to a minimum, these can help:

- For all AIX client partitions, set up Dead Gateway Detection (DGD) on the default route:
1. route change default -active_dgd                        <--set up DGD on the default route
2. add 'route change default -active_dgd' to /etc/rc.tcpip <--makes this change permanent
3. no -p -o dgd_ping_time=2                                <--sets the interval at which DGD pings the gateway to 2 seconds
                                                              (default is 5s; 2s allows faster recovery)
- On the network switch, enable PortFast if Spanning Tree is on or disable Spanning Tree.
- On the network switch, set the channel group for your ports to Active if they are currently set to Passive

---------------------------------------------------

Simplified SEA (without control channel):

SEA can implement a new method to discover SEA pair partners using the VLAN ID 4095 in its virtual switch. After partners are identified, a new SEA high availability (HA) protocol is used to communicate between them.

If the following requirements are met during SEA creation, no control channel adapter is necessary:
-VIOS Version 2.2.3 or later
-Hardware Management Console (HMC) 7.7.8 or later
-Firmware Level 780 or higher

---------------------------------------------------

Good overview of SEA sharing mode + VLANs:
entstat -all entX | grep -e "  Priority" -e "Virtual Adapter" -e "  State:" -e "High Availability Mode" -e "  ent"

Good overview of SEA Link status + MAC address:
entstat -all entX | grep -e "(ent" -e "Type:" -e "Address:" -e "Link Status" -e "Link State:" -e "Switch "


---------------------------------------------------

Checking SEA Load sharing distribution:

# entstat -all ent8 | grep -e "  Priority" -e "Virtual Adapter" -e "  State:" -e "High Availability Mode"

ent8:       SEA adapter
ent4, ent5: Trunk virtual ethernet adapters in SEA

VIO1:
State: PRIMARY_SH                    <--shows it is in load sharing mode and it is the primary SEA adapter (it would be the active one in failover mode)
High Availability Mode: Sharing
Priority: 1
...
Virtual Adapter: ent4
  Priority: 1  Active: False
Virtual Adapter: ent5
  Priority: 1  Active: True

VIO2:
State: BACKUP_SH                    <--shows it is in load sharing mode and it is the backup SEA adapter (it would be the standby one in failover mode)
High Availability Mode: Sharing
Priority: 2
...
Virtual Adapter: ent4
  Priority: 2  Active: True
Virtual Adapter: ent5
  Priority: 2  Active: False

---------------------------------------------------

SEA and SEA failover creation:

To create a Shared Ethernet Adapter (SEA) you need:
- <PHYS>: a physical adapter as backend
- <VIRT>: a virtual adapter
- <VLAN>: an internal VLAN ID
default: specifies the default virtual adapter to be used for non-VLAN-tagged packets
defaultid: the VLAN ID used for untagged packets (the PVID of the SEA device)

for SEA failover:
<CONT>: a second virtual adapter for the control channel 

+ optional settings:
-netaddr: SEA will periodically ping this IP address, so it can detect network failures
-largesend: enable TCP segmentation offload


 # simple SEA
 $ mkvdev -sea <PHYS> -vadapter <VIRT> -default <VIRT> -defaultid <VLAN>

 # Shared Ethernet Adapter Failover:
 $ mkvdev -sea <PHYS> -vadapter <VIRT> -default <VIRT> -defaultid <VLAN> -attr ha_mode=auto ctl_chan=<CONT>

 # Shared Ethernet Adapter Failover without control channel, example:
 $ mkvdev -sea ent14 -vadapter ent8 ent10 ent12 ent13 -default ent8 -defaultid 4000 -attr jumbo_frames=yes ha_mode=auto
 (After creation it is possible to change to sharing mode, first on the primary VIOS, then on the backup VIOS: chdev -dev ent15 -attr ha_mode=sharing)


 (with optional settings)
 $ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3 netaddr=9.3.4.1 largesend=1

(Any interface with an IP address on the adapters used when defining the SEA must be detached.)
(When you want to change something on SEA (enable/disable load sharing...), do the change on the primary SEA first, then set it on the backup SEA.)

---------------

adding a virtual adapter later to the SEA:

chdev -dev entX -attr virt_adapters=entY,entZ
(entX: the SEA adapter; entY,entZ: the virtual trunk adapters - all virtual adapters have to be listed here, not just the new one)

---------------

Changing SEA online (without downtime):

The SEA is configured on VIO1 with priority 1 and on VIO2 with priority 2 (this is important when changing sharing mode).
The SEA is configured in load sharing mode, so first I change it to auto, and then to standby on the VIO I am working on:

1.chdev -dev entX -attr ha_mode=auto                        <--first on VIO1, then on VIO2, change to auto mode, so both will be in auto

2.chdev -dev entX -attr ha_mode=standby                     <--on VIO1: so network will go through on VIO2
3.rmvdev -sea entX                                        <--on VIO1: remove SEA
4.rmvdev -lnagg entY                                      <--on VIO1: remove Etherchannel
5.<<do any change/HW repair>>
6.mkvdev -lnagg ent0 ent1 -attr mode=8023ad...              <--on VIO1: recreate Etherchannel
7.mkvdev -sea ent2 -vadapter ent8 ent9 ... ha_mode=standby  <--on VIO1: recreate SEA
8.chdev -dev entX -attr ha_mode=auto                        <--on VIO1: set back ha_mode to auto, so traffic will go based on priority

do the same tasks (starting from the standby step) on VIOS2... when finished:
chdev -dev entX -attr ha_mode=sharing                       <--first on VIO1, then on VIO2


This works as well:
rmdev -l ent15
chdev -l ent15 -a jumbo_frames=yes
mkdev -l ent15

---------------

SEA Failover testing:

On VIOS1 and VIOS2 virtual adapters have been created. At creation time trunk priority has been set:
VIOS1: 1
VIOS2: 2

With the 'mkvdev' command, SEAs (ent14) have been created on both VIOS.

1. check settings:

    VIOS1:
    lsattr -El ent14 | grep ha_mode            <--should show: ha_mode=auto
    netstat -v ent14 | grep Active             <--should show: Priority: 1  Active: True

    VIOS2:
    lsattr -El ent14 | grep ha_mode            <--should show: ha_mode=auto
    netstat -v ent14 | grep Active             <--should show: Priority: 2  Active: False

2. perform manual SEA failover:

    VIOS1:
    chdev -l ent14 -a ha_mode=standby

3. check settings:

    VIOS1:
    lsattr -El ent14 | grep ha_mode            <--should show: ha_mode=standby
    netstat -v ent14 | grep Active             <--should show: Priority: 1  Active: False
    errpt | head                               <--should show: BECOME BACKUP

    VIOS2:
    lsattr -El ent14 | grep ha_mode            <--should show: ha_mode=auto
    netstat -v ent14 | grep Active             <--should show: Priority: 2  Active: True
    errpt | head                               <--should show: BECOME PRIMARY

4. switching back:

    VIOS1:
    chdev -l ent14 -a ha_mode=auto

5. check settings:

    VIOS1:
    lsattr -El ent14 | grep ha_mode            <--should show: ha_mode=auto
    netstat -v ent14 | grep Active             <--should show: Priority: 1  Active: True
    errpt | head                               <--should show: BECOME PRIMARY

    VIOS2:
    lsattr -El ent14 | grep ha_mode            <--should show: ha_mode=auto
    netstat -v ent14 | grep Active             <--should show: Priority: 2  Active: False
    errpt | head                               <--should show: BECOME BACKUP

---------------

thread attribute:

Threading ensures that CPU resources are shared fairly when a Virtual I/O Server provides a mix of SEA and VSCSI services.
If it is set to 1, it equalizes the priority between virtual disk and SEA network I/O. This throttles Ethernet traffic to prevent it from consuming a higher percentage of CPU resources than the virtual SCSI activity. This is a concern only when CPU resources are constrained.

padmin@vios1 : /home/padmin # lsdev -dev ent14 -attr | grep thread
thread        1          Thread mode enabled (1) or disabled (0)                            True

Threading is enabled by default for shared Ethernet adapters.
Disable threading when a Virtual I/O Server is not used for VSCSI (chdev -dev entX -attr thread=0).

---------------

entstat -all ent4                                          shows if this adapter is active or not (entstat -all ent4 | grep Active)
netstat -cdlistats | grep -Ei "\(ent|media|link status"    this lists the link status of all physical adapters (good!!!)

---------------

Configuring the interface on SEA (adding ip...):

cfgassist or mktcpip command:
mktcpip -hostname VIO_Server1 -inetaddr 9.3.5.196 -interface en3 -netmask 255.255.254.0 -gateway 9.3.4.1

---------------

SEA load sharing mode error:

chdev -dev ent23 -attr ha_mode=sharing

Method error (/usr/lib/methods/chgsea):
        0514-018 The values specified for the following attributes 
                 are not valid:
ha_mode. Insufficient no. of adapters.



This indicates that only one virtual (trunk) adapter is configured in the SEA, so the load cannot be shared (that is why you cannot change the ha_mode attribute). Add an additional virtual Ethernet adapter to the SEA for sharing mode to activate.

---------------

Total Etherchannel failure and LIMBO state on SEA:

During dual VIOS install, when second SEA configured on VIOS2, network connection was lost and received this:

errpt:
CE9566DF   0719154713 P H ent9           TOTAL ETHERCHANNEL FAILURE

entstat for SEA:
    State: LIMBO
    High Availability Mode: Auto
    Priority: 1
...
Virtual Adapter: ent5
  Priority: 1  Active: False
Virtual Adapter: ent4
  Priority: 1  Active: False
Virtual Adapter: ent3
  Priority: 1  Active: False
Virtual Adapter: ent2
  Priority: 1  Active: False


Limbo state means:
The physical network is not operational or network state is unknown, or the Shared Ethernet Adapter cannot ping the specified remote host.

Limbo packets are sent by the primary Shared Ethernet Adapter when it detects that its physical network is not operational, or when it cannot ping the specified remote host (to inform the backup that it needs to become active).

After checking the control channel on both SEAs, a configuration problem was found. One of the control channel adapters was in virtual switch ETHERNET0, the other one in ETHERNET1, so the control channel could not work properly. (You can check this in the VIOS LPAR properties on the HMC, or with the entstat command.)

On the VIOS LPAR with wrong control channel:
1. remove SEA device: rmdev...
2. shutdown LPAR and change profile on HMC: control channel virt. adapter to the correct virtual switch
3. start LPAR and create SEA device again.

After this everything was OK.

---------------


NPIV (Virtual Fibre Channel Adapter)

With NPIV, you can configure the managed system so that multiple logical partitions can access independent physical storage through the same physical fibre channel adapter. (NPIV means N_Port ID Virtualization. N_Port ID is a storage term, for node port ID, identifying ports of the node (FC adapter) in the SAN fabric.)
To access physical storage in a typical storage area network (SAN) that uses fibre channel, the physical storage is mapped to logical units (LUNs) and the LUNs are mapped to the ports of physical fibre channel adapters. Each physical port on each physical fibre channel adapter is identified using one worldwide port name (WWPN).

NPIV is a standard technology for fibre channel networks that enables you to connect multiple logical partitions to one physical port of a physical fibre channel adapter. Each logical partition is identified by a unique WWPN, which means that you can connect each logical partition to independent physical storage on a SAN.

To enable NPIV on the managed system, you must create a Virtual I/O Server logical partition (version 2.1, or later) that provides virtual resources to client logical partitions. You assign the physical fibre channel adapters (that support NPIV) to the Virtual I/O Server logical partition. Then, you connect virtual fibre channel adapters on the client logical partitions to virtual fibre channel adapters on the Virtual I/O Server logical partition. A virtual fibre channel adapter is a virtual adapter that provides client logical partitions with a fibre channel connection to a storage area network through the Virtual I/O Server logical partition. The Virtual I/O Server logical partition provides the connection between the virtual fibre channel adapters on the Virtual I/O Server logical partition and the physical fibre channel adapters on the managed system.

The following figure shows a managed system configured to use NPIV:



on VIO server:
root@vios1: / # lsdev -Cc adapter
fcs0      Available 01-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1      Available 01-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
vfchost0  Available       Virtual FC Server Adapter
vfchost1  Available       Virtual FC Server Adapter
vfchost2  Available       Virtual FC Server Adapter
vfchost3  Available       Virtual FC Server Adapter
vfchost4  Available       Virtual FC Server Adapter

on VIO client:
root@aix21: /root # lsdev -Cc adapter
fcs0 Available C6-T1 Virtual Fibre Channel Client Adapter
fcs1 Available C7-T1 Virtual Fibre Channel Client Adapter

Two unique WWPNs (world-wide port names) starting with the letter "c" are generated by the HMC for the VFC client adapter. The pair is critical and both must be zoned if Live Partition Migration is planned to be used. The virtual I/O client partition uses one WWPN to log into the SAN at any given time. The other WWPN is used when the client logical partition is moved to another managed system using PowerVM Live Partition Mobility. 

lscfg -vpl fcsX will show only the first WWPN
fcstat fcsX will show only the active WWPN


Both commands show only one WWPN: fcstat always shows the currently active WWPN which is in use (and which will change after an LPM), while lscfg shows only the first WWPN assigned to the adapter, as a static value.
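
For example (illustrative output; the WWPN is an example):

root@aix21: / # lscfg -vpl fcs0 | grep "Network Address"
        Network Address.............C05076066E590010
root@aix21: / # fcstat fcs0 | grep "World Wide Port Name"
World Wide Port Name: 0xC05076066E590010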

Configure one VFC client adapter per physical port per client partition, with a maximum of 64 active VFC client adapters per physical port. There is always a one-to-one relationship between the virtual Fibre Channel client adapter and the virtual Fibre Channel server adapter.

The difference between traditional redundancy with SCSI adapters and the NPIV technology using virtual Fibre Channel adapters is that the redundancy occurs on the client, because only the client recognizes the disk. The Virtual I/O Server is essentially just a pass-through, managing the data transfer through the POWER Hypervisor. When using Live Partition Mobility, the storage moves to the target server without requiring a reassignment (unlike with virtual SCSI), because the virtual Fibre Channel adapters have their own WWPNs that move with the client partition to the target server.

After creating a VFC client adapter with DLPAR, if you then create the adapter again in the partition profile (to make it persistent across restarts), a different pair of virtual WWPNs would be generated. To prevent this undesired situation, which would require new SAN zoning and storage configuration, make sure to save any virtual Fibre Channel client adapter DLPAR changes into a new partition profile by selecting: Configuration -> Save Current Configuration, and change the default partition profile to the new profile.

-----------------------------------------------------

The NPIV client's num_cmd_elems attribute should not exceed the VIOS adapter's num_cmd_elems.
If you increase num_cmd_elems on the virtual FC (vFC) adapter, then you should also increase the setting on the real FC adapter. 
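
To check and adjust these (a hedged sketch; the values are only examples, the right numbers depend on the workload and the adapter):

on VIO client:
# lsattr -El fcs0 -a num_cmd_elems                 <--current value on the virtual FC client adapter
# chdev -l fcs0 -a num_cmd_elems=512 -P            <--change it (takes effect after reboot or device reconfiguration)

on VIO server:
$ lsdev -dev fcs0 -attr num_cmd_elems              <--should be at least as high as the value used on the client vFC adapters
$ chdev -dev fcs0 -attr num_cmd_elems=2048 -perm   <--change it on the physical adapter, then reboot the VIOS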

-----------------------------------------------------


Check NPIV adapter mapping on client:

root@bb_lpar: / # echo "vfcs" | kdb                                         <--vfcs is a kdb subcommand
...
NAME      ADDRESS             STATE   HOST      HOST_ADAP  OPENED NUM_ACTIVE
fcs0      0xF1000A000033A000  0x0008  aix-vios1 vfchost8  0x01    0x0000    <--shows which vfchost is used on vio server for this client
fcs1      0xF1000A0000338000  0x0008  aix-vios2 vfchost6  0x01    0x0000

-----------------------------------------------------

NPIV creation and how they are related together:

FCS0: Physical FC Adapter installed on the VIOS
VFCHOST0: Virtual FC (Server) Adapter on VIOS
FCS0 (on client): Virtual FC adapter on VIO client




Creating NPIV adapters:
0. install the physical FC adapters in the VIO Servers
1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (don't forget profile (save current))
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (the ids should be mapped, don't forget profile)
3. cfgdev (VIO server), cfgmgr (client)    <--it will bring up the new adapter vfchostX on vio server, fcsX on client
4. check status:
    lsdev -dev vfchost*                    <--lists virtual FC server adapters
    lsmap -vadapter vfchost0 -npiv         <--gives more detail about the specified virtual FC server adapter
    lsdev -dev fcs*                        <--lists physical FC adapters
    lsnports                               <--checks NPIV readiness (fabric=1 means npiv ready)
5. vfcmap -vadapter vfchost0 -fcp fcs0      <--mapping the virtual FC adapter to the VIO's physical FC
6. lsmap -all -npiv                        <--checks the mapping
7. HMC -> VIO Client -> get the WWPN of the adapter    <--if LPM will not be used, only the first WWPN is needed
8. SAN zoning

-----------------------------------------------------

Checking if VIOS FC Adapter supports NPIV:

On VIOS as padmin:
$ lsnports
name             physloc                        fabric tports aports swwpns  awwpns
fcs0             U78C0.001.DAJX633-P2-C2-T1        1     64     64   2048    2032
fcs1             U78C0.001.DAJX633-P2-C2-T2        1     64     64   2048    2032
fcs2             U78C0.001.DAJX634-P2-C2-T1        1     64     64   2048    2032
fcs3             U78C0.001.DAJX634-P2-C2-T2        1     64     64   2048    2032

value in column fabric:
1 - adapter and the SAN switch is NPIV ready
2 - adapter or SAN switch is not NPIV ready and SAN switch configuration should be checked

-----------------------------------------------------

Getting WWPNs from HMC CLI:

lshwres -r virtualio --rsubtype fc --level lpar -m <Man. Sys.> -F lpar_name,wwpns --header --filter lpar_names=<lpar name>

lpar_name,wwpns
bb_lpar,"c05076066e590016,c05076066e590017"
bb_lpar,"c05076066e590014,c05076066e590015"
bb_lpar,"c05076066e590012,c05076066e590013"
bb_lpar,"c05076066e590010,c05076066e590011"

-----------------------------------------------------


Replacement of a physical FC adapter with NPIV

1. identify the adapter

$ lsdev -dev fcs4 -child
name             status      description
fcnet4           Defined     Fibre Channel Network Protocol Device
fscsi4           Available   FC SCSI I/O Controller Protocol Device

2. unconfigure the mappings


$ rmdev -dev vfchost0 -ucfg
vfchost0 Defined

3. FC adapters and their child devices must be unconfigured or deleted

$ rmdev -dev fcs4 -recursive -ucfg
fscsi4 Defined
fcnet4 Defined
fcs4 Defined

4. diagmenu
DIAGNOSTIC OPERATING INSTRUCTIONS -> Task Selection -> Hot Plug Task -> PCI Hot Plug Manager -> Replace/Remove a PCI Hot Plug Adapter.
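
After the physical replacement the devices have to be configured back (a hedged sketch of the remaining steps, keeping the device names used above):

5. configure the new adapter and its child devices

$ cfgdev

6. verify that the unconfigured devices (fscsi4, vfchost0) are Available again and that the client logs back in

$ lsmap -vadapter vfchost0 -npiv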

-----------------------------------------------------

Changing WWPN number:
There are 2 methods: changing it dynamically (chhwres) or changing it in the profile (chsyscfg). Both are similar and both are done from the HMC CLI.

I. Changing dynamically:

1. get current adapter config:
# lshwres -r virtualio --rsubtype fc -m <man. sys.> --level lpar | grep <LPAR name>
lpar_name=aix_lpar_01,lpar_id=14,slot_num=8,adapter_type=client,state=1,is_required=0,remote_lpar_id=1,remote_lpar_name=aix_vios1,remote_slot_num=123,"wwpns=c0507603a42102d8,c0507603a42102d9"

2. remove the adapter from the client LPAR: rmdev -Rdl fcsX (if needed, unmanage the device from the storage driver first)

3. remove adapter dynamically from HMC (it can be done in GUI)

4. create new adapter with new WWPNS dynamically:
# chhwres -r virtualio -m <man. sys.> -o a -p aix_lpar_01 --rsubtype fc -a "adapter_type=client,remote_lpar_name=aix_vios1,remote_slot_num=123,\"wwpns=c0507603a42102de,c0507603a42102df\"" -s 8

5. cfgmgr on client LPAR will bring up adapter with new WWPNs.

6. save the actual config to the profile (so the next profile activation will not bring back the old WWPNs)

(VFC mapping removal was not needed in this case; if there are problems, try reconfiguring that as well on the VIOS side)

-----------------------------------------------------

II. changing in the profile:

same as above just some commands are different:

get current config:
# lssyscfg -r prof -m <man. sys.> --filter lpar_names=aix_vios1
aix_lpar01: default:"""6/client/1/aix_vios1/5/c0507604ac560004,c0507604ac560005/1"",""7/client/1/aix_vios1/4/c0507604ac560018,c0507604ac560019/1"",""8/client2/aix_vios2/5/c0507604ac56001a,c0507604ac56001b/1"",""9/client/2/aix_vios2/4/c0507604ac56001c,c0507604ac56001d/1"""

create new adapters in the profile:
chsyscfg -m <man. sys.> -r prof  -i 'name=default,lpar_id=5,"virtual_fc_adapters+=""7/client/1/aix-vios1/4/c0507604ac560006,c0507604ac560007/1"""'

-m             - managed system
-r prof        - profile will be changed
-i '           - attributes
name=default   - name of the profile, which will be changed
lpar_id=5      - id of the client LPAR
7              - adapter id on client (slot id)
client         - adapter type
1              - remote LPAR id (VIOS server LPAR id)
aix_vios1      - remote LPAR name (VIOS server name)
4              - remote slot number (adapter id on VIOS server)
WWPN           - both WWPN numbers (separated with a comma)
1              - required or desired (1- required, 0- desired)


Here VFC unmapping was needed:
vfcmap -vadapter vfchost4 -fcp        <--remove mapping
vfcmap -vadapter vfchost4 -fcp fcs2        <--create new mapping

-----------------------------------------------------

Virtual FC login to SAN:

When a new LPAR with VFC has been created, before the LUNs can be seen (for example to install AIX), the VFC adapter has to log in to the SAN for the first time.
This can be done on HMC (above HMC V7 R7.3) with command chnportlogin

chnportlogin: it allows you to allocate, log in and zone WWPNs before the client partition is activated.

On HMC:
1. lsnportlogin -m <man. sys> --filter lpar_ids=4                     <-- list status of VFC adapters (lpar_id should be given)

lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150008,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150009,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000a,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000b,wwpn_status=0,logged_in=none,wwpn_status_reason=null

The WWPN status.  Possible values are:
   0 - WWPN is not activated
   1 - WWPN is activated
   2 - WWPN status is unknown


2. chnportlogin -m <man. sys> --id 4 -o login                           <-- activate WWPNs (VFC logs in to SAN)

3. lsnportlogin -m <man. sys> --filter lpar_ids=4                       <-- list status (it should be wwpn_status=1)

lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150008,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150009,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000a,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000b,wwpn_status=1,logged_in=vios,wwpn_status_reason=null


4. The storage team can do the LUN assignment; after they finish, you can do the logout:
   chnportlogin -m <man. sys> --id 4 -o logout

-----------------------------------------------------

IOINFO


If the HMC is below V7 R7.3, ioinfo can be used to cause VFC adapters to log in to the SAN.
ioinfo can also be used for debug purposes or to check if disks are available / which disk is the boot disk.

It can be reached from the boot screen with number 8 (Open Firmware Prompt):


IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM

          1 = SMS Menu                          5 = Default Boot List
          8 = Open Firmware Prompt              6 = Stored Boot List


     Memory      Keyboard     Network     SCSI     Speaker  ok

0 > ioinfo

!!! IOINFO: FOR IBM INTERNAL USE ONLY !!!
This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system

Select a tool from the following

 1. SCSIINFO
 2. IDEINFO
 3. SATAINFO
 4. SASINFO
 5. USBINFO
 6. FCINFO
 7. VSCSIINFO

q - quit/exit

==> 6


Then choose the VFC client device from the list --> List Attached FC Devices (this will cause that VFC device to log in to the SAN)
After that on VIOS: lsmap -npiv ... will show LOGGED_IN

(to quit from "ioinfo" command "reset-all" will do a reboot of the LPAR)


-----------------------------------------------------


On the VIO Server create an optical device:

    -for using physical CDs and DVDs, create an optical device
        $ mkvdev -vdev cd0 -vadapter vhost4 -dev vcd
        vcd Available
    
        $ lsdev -virtual
        ...
        vcd             Available  Virtual Target Device - Optical Media

    -for file backed (iso images) optical device
        $ mkvdev -fbo -vadapter vhost1
        vtopt0 Available

        $lsdev -virtual
        ...
        vtopt0           Available   Virtual Target Device - File-backed Optical

        (copy the iso image to /var/vio/VMLibrary, 'lsrep' will show media repository content)
        (lssp -> mkrep -sp rootvg -size 4G    <--this will create media repository)
        (creating an iso image: mkvopt -name <filename>.iso -dev cd0 -ro)

        load the image into the vtopt0 device: loadopt -vtd vtopt0 -disk dvd.1022A4_OBETA_710.iso
        (lsmap -all will show it)
        
        or you can check it:
        padmin@vios1 : /home/padmin # lsvopt
        VTD             Media                                   Size(mb)
        vtopt0          AIX_7100-00-01_DVD_1_of_2_102010.iso        3206
            
        if later another disk is needed, you can unload an image with this command: unloadopt -vtd vtopt0
        if we don't need the image anymore at all we can remove it from the repository: rmvopt -name AIX_7100-00-01.iso


----------------------------

Mirroring the VIOS rootvg (VIOS OS mirror):

$ extendvg -f rootvg hdisk1                <--add hdisk1 to the VIOS rootvg
$ mirrorios hdisk1                         <--mirror rootvg to hdisk1 (the VIOS reboots when mirroring finishes, unless -defer is used)


SR-IOV - vNIC

SR-IOV

SR-IOV (Single Root I/O Virtualization) is a network virtualization technology. The basic idea is that a physical device can have multiple virtual instances of itself, and these can be assigned to any LPAR running on the managed system. We have only one physical adapter, but each LPAR will think it has its own dedicated adapter. This is achieved by creating logical ports on top of the physical ports of the adapter. These ports exist in the Hypervisor (firmware), so no VIOS is necessary. In order to see the new SR-IOV menu points in the HMC GUI, the firmware and the HMC must be at the correct level.

Dedicated Mode - Shared Mode:
An SR-IOV capable adapter is either in dedicated mode or shared mode.
In dedicated mode, the adapter is owned by one LPAR. The physical ports of the adapter are owned by that partition. (This is the usual, traditional configuration.)
In shared mode, the adapter is owned by the Hypervisor. In this mode logical ports can be created, and these can be assigned to any LPAR.
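
The adapters can also be listed from the HMC CLI (a hedged sketch, assuming an HMC level with SR-IOV support; the exact output fields vary by HMC version):

lshwres -r sriov --rsubtype adapter -m <man. sys.>                <--lists the SR-IOV adapters and their mode
lshwres -r sriov --rsubtype logport -m <man. sys.> --level eth    <--lists the Ethernet logical ports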

Logical Port (LP) - Virtual Function (VF)
A Virtual Function (VF) is a general term used by the PCI standards. On IBM Power Systems, SR-IOV implements VFs as logical ports. Each logical port is associated with a physical port of the adapter. (Basically LPs and VFs are the same thing. General documentation often uses the term VF, but during SR-IOV configuration on the HMC only LP appears, so LP is used here. In these standards another term, the Physical Function (PF), also exists, which is the physical port.)

------------------------------------

Capacity (SR-IOV desired bandwidth)

During Logical Port (Virtual Function) creation the desired capacity needs to be configured. This is similar to the Entitled Capacity (CPU) setting, just here a percentage is used. The configured value is the desired minimum bandwidth in percent. It is not capped, which means that if there is additional bandwidth that is not being used currently, it will be shared equally among all logical ports. Assignments are made in increments of 2%, and the total assignments for a single port cannot exceed 100%. Capacity cannot be changed dynamically. (It can be changed in the profile, after which profile activation is needed.)

So, if an LPAR needs it, it will get its configured percentage of outgoing bandwidth. If additional outgoing bandwidth is available, any partition can use it. If a partition doesn't need its minimum, that bandwidth is available to other partitions until the owning partition needs it. Capacity settings don't have any influence on the incoming bandwidth.

------------------------------------

SR-IOV and LPM:



From a virtualization perspective, SR-IOV logical ports are seen as physical adapters at the OS level, therefore operations like Live Partition Mobility are not supported when an SR-IOV logical port is configured on the partition (LPAR B).

If a logical port is part of a SEA (which bridges traffic from client partitions to the physical network), then the client LPAR has only a Virtual Ethernet Adapter (LPAR A), so it can continue using Live Partition Mobility.

------------------------------------

SR-IOV and Link Aggregation

In an LACP configuration multiple primary SR-IOV logical ports are allowed. When LACP (IEEE 802.3ad) is configured with multiple primary logical ports, only SR-IOV logical ports can be part of the link aggregation and only one logical port can be configured per physical port.

So, with LACP only one logical port per physical port can be used.
(A configuration with more than one logical port assigned to a physical port will not work.)


To prevent users from adding another logical port to the physical port while LACP is being used, you can set the logical port capacity to 100%.
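As a minimal illustration (not from the original notes; ent4 and ent5 are assumed SR-IOV logical ports, each on a different physical port), such an LACP aggregation is created on AIX with the EtherChannel pseudo device:

# mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent4,ent5 -a mode=8023ad      <--creates a new entX in 802.3ad (LACP) mode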

------------------------------------

SR-IOV and Etherchannel (NIB):

In an active-passive configuration (Network Interface Backup), an SR-IOV logical port can be the primary (active) adapter, the backup (passive) adapter, or both. If more than one primary adapter is configured in an Etherchannel, then an SR-IOV logical port cannot be a primary adapter. When an SR-IOV logical port is used in an active-passive configuration, it must be configured to detect when to fail over from the primary to the backup adapter. This can be achieved by configuring an IP address to ping.
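A hedged sketch of such a NIB EtherChannel on AIX (the adapter names and the ping address are assumptions):

# mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent4 -a backup_adapter=ent5 -a netaddr=10.10.10.1      <--ent4 primary (SR-IOV logical port), ent5 backup; failover is triggered when 10.10.10.1 stops answering pings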

------------------------------------

SR-IOV Configuration:
(Here the HMC classic GUI has been used, not the enhanced HMC GUI.)


1. verify prerequisites: HMC level, Firmware level, compatible SR-IOV adapter, Man. Sys. capabilities (SR-IOV capable: True)

2. change adapter from Dedicated to Shared mode: Man. Sys. Properties --> IO tab --> choose adapter --> click on shared mode
(This happens at Man. Sys. level, so the adapter must not be assigned to any LPAR. If you check again after the modification, you will see the adapter is owned by the Hypervisor. It is possible to switch back to dedicated mode; any already configured logical ports must be de-configured first.)

3. Physical port config: Man. System --> Properties --> IO --> choose an SR-IOV adapter --> choose a port
Here you can configure speed (10Gb), MTU size, label...

4. Logical Port config: a logical port is mapped to an LPAR, so an LPAR has to be chosen before configuration. It can be done as a DLPAR operation, or it can be done in the LPAR profile.

Add a logical port to a Physical Port:
DLPAR: choose LPAR --> Dynamic part. --> SR-IOV Log. Ports --> Action --> Add Log. Port --> choose a phys. port to conf. additional Log. Port.
PROFILE: In LPAR profile, SR-IOV Log. Ports -->  SR-IOV Menu --> Add Log. Port -->  choose a physical port to configure additional Log. Port

After that, a new window will pop up:


Capacity:     Set desired percentage of Log. Port
Promiscuous:  if the logical port will be assigned to a VIOS in a SEA, then enabling promiscuous mode is required.
              (Promiscuous mode can be enabled on only one logical port per physical port.)
              When promiscuous mode is enabled, Allow All VLAN IDs and  Allow all O/S Defined MAC Addresses are the only options available.

After finishing, at OS level a new adapter will be configured with VF in its name (cfgmgr may be needed):

# lsdev -Cc adapter
ent0    Available 02-00 4-Port Gigabit Ethernet PCI-Express Adapter (e414571614102004)
ent1    Available 02-01 4-Port Gigabit Ethernet PCI-Express Adapter (e414571614102004)
ent2    Available 02-02 4-Port Gigabit Ethernet PCI-Express Adapter (e414571614102004)
ent3    Available 02-03 4-Port Gigabit Ethernet PCI-Express Adapter (e414571614102004)
ent4    Available 0C-00 PCIe3 4-Port 10GbE SR Adapter VF(df1028e21410e304)

# entstat -d ent4
...
...
VF Minimum Bandwidth: 24%
VF Maximum Bandwidth: 100%

------------------------------------
------------------------------------
------------------------------------

vNIC

vNIC (Virtual Network Interface Controller) is a new type of network adapter available on AIX 7.1 TL4 (or later) and on AIX 7.2. vNIC adapters are based on SR-IOV technology, and LPM is possible with them.

# lsdev -Cc adapter
ent0   Available   Virtual I/O Ethernet Adapter (l-lan)
ent1   Available   Virtual I/O Ethernet Adapter (l-lan)
ent2   Available   Virtual I/O Ethernet Adapter (l-lan)
ent3   Available   Virtual NIC Client Adapter (vnic)
ent4   Available   Virtual NIC Client Adapter (vnic)


Shared Storage Pools:

A shared storage pool is a pool of SAN storage devices that can span multiple Virtual I/O Servers. It is based on a cluster of Virtual I/O Servers and a distributed data object repository. (This repository uses a cluster filesystem that was developed specifically for storage virtualization; you will see something like: /var/vio/SSP/bb_cluster/D_E_F_A_U_L_T_061310)

When using shared storage pools, the Virtual I/O Server provides storage through logical units that are assigned to client partitions. A logical unit is a file backed storage device that resides in the cluster filesystem in the shared storage pool. It appears as a virtual SCSI disk in the client partition.

The Virtual I/O Servers that are part of the shared storage pool are joined together to form a cluster. Only Virtual I/O Server partitions can be part of a cluster. The Virtual I/O Server clustering model is based on Cluster Aware AIX (CAA) and RSCT technology.

Maximum cluster size by VIOS version:
VIOS 2.2.0.11, Fix Pack 24, Service Pack 1                 <--1 node
VIOS 2.2.1.3                                               <--4 nodes
VIOS 2.2.2.0                                               <--16 nodes

------------------------------------------------------------------------------

Thin provisioning

A thin-provisioned device represents a larger image than the actual physical disk space it is using. It is not fully backed by physical storage as long as the blocks are not in use. A thin-provisioned logical unit is defined with a user-specified size when it is created. It appears in the client partition as a virtual SCSI disk with that user-specified size. However, on a thin-provisioned logical unit, blocks on the physical disks in the shared storage pool are only allocated when they are used.

Consider a shared storage pool that has a size of 20 GB. If you create a logical unit with a size of 15 GB, the client partition will see a virtual disk with a size of 15 GB. But as long as the client partition does not write to the disk, only a small portion of that space will initially be used from the shared storage pool. If you create a second logical unit also with a size of 15 GB, the client partition will see two virtual SCSI disks, each with a size of 15 GB. So although the shared storage pool has only 20 GB of physical disk space, the client partition sees 30 GB of disk space in total.

After the client partition starts writing to the disks, physical blocks are allocated in the shared storage pool and the amount of free space in the shared storage pool decreases. Deleting files or logical volumes on a client partition does not increase the free space of the shared storage pool.

When the shared storage pool is full, client partitions will see an I/O error on the virtual SCSI disk. Therefore even though the client partition will report free space to be available on a disk, that information might not be accurate if the shared storage pool is full.

To prevent such a situation, the shared storage pool provides a threshold that, if reached, writes an event in the errorlog of the Virtual I/O Server.

(If you use the -thick flag with the mkbdsp command, a usual (thick) disk is created instead of a thin-provisioned one, and the client has all of the disk space backed by physical storage.)
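For example (a sketch reusing the cluster/pool names from the command section below; bb_disk3 is an assumed LU name):

mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk3 -vadapter vhost0 -thick     <--creates a fully backed (thick) 10G LUN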

------------------------------------------------------------------------------

When a cluster is created, you must specify one physical volume for the repository and one for the storage pool. The storage pool physical volumes are used to provide storage to the client partitions. The repository physical volume is used to perform cluster communication and to store the cluster configuration.

If you need to increase the free space in the shared storage pool, you can either add an additional physical volume or you can replace an existing volume with a bigger one. Physical disks cannot be removed from the shared storage pool.


Requirements:
-each VIO Server must correctly resolve the other VIO Servers in the cluster (DNS or /etc/hosts must contain all VIO Servers)
-the hostname command should show the FQDN (with domain.com)
-VLAN-tagged interfaces are not supported for cluster communication in earlier VIO versions
-fibre channel adapters should be set to dyntrk=yes, fc_err_recov=fast_fail (see the sketch after this list)
-the disk reserve policy should be set to no_reserve and all VIO Servers must have these disks in available state
-1 disk is needed for the repository (min. 10GB) and 1 or more for data (min. 10GB) (these should be SAN FC LUNs)
-Active Memory Sharing paging space cannot be on an SSP disk
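A minimal sketch of setting these attributes (run as root on the VIOS via oem_setup_env; fscsi0 and hdisk5 are assumed device names, and the reservation attribute name can differ for third-party multipath drivers such as hdiskpower devices):

chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P     <---P defers the change until the next reboot/reconfiguration
chdev -l hdisk5 -a reserve_policy=no_reserve                   <--disable SCSI reservation on the SSP disk
lsattr -El fscsi0 -a dyntrk -a fc_err_recov                    <--verify the settings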

------------------------------------------------------------------------------

Commands for create:
cluster -create -clustername bb_cluster -spname bb_pool -repopvs hdiskpower1 -sppvs hdiskpower2 -hostname bb_vio1
        -clustername bb_cluster                                                  <--name of the cluster
        -spname bb_pool                                                          <--storage pool name
        -repopvs hdiskpower1                                                     <--disk of repository
        -sppvs hdiskpower2                                                       <--storage pool disk
        -hostname bb_vio1                                                        <--VIO Server hostname (where to create cluster)
(This command will create cluster, start CAA daemons and create shared storage pool)


cluster -addnode -clustername bb_cluster -hostname bb_vio2                       adding a node to the cluster (up to 16 nodes can be added)
chsp -add -clustername bb_cluster -sp bb_pool hdiskpower2                        adding a disk to the shared storage pool

mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk1 -vadapter vhost0     creating a 10G LUN and assigning to vhost0 (lsmap -all will show it)
mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk2                      creating a 10G LUN

mkbdsp -clustername bb_cluster -sp bb_pool -bd bb_disk2 -vadapter vhost0         assigning LUN to a vhost adapter (command works only if bd name is unique)
mkbdsp -clustername bb_cluster -sp bb_pool -luudid c7ef7a2 -vadapter vhost0      assigning an earlier created LUN by LUN ID to a vhost adapter (same as above)
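On the client LPAR the mapped LU shows up as a virtual SCSI disk after device discovery (a small illustration; the hdisk number will differ):

# cfgmgr
# lsdev -Cc disk
hdisk1 Available  Virtual SCSI Disk Drive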



Commands for display:
cluster -list                                                      display cluster name and ID
cluster -status -clustername bb_cluster                            display cluster state and pool state on each node

lssp -clustername bb_cluster                                       list storage pool details (pool size, free space...)
lssp -clustername bb_cluster -sp bb_pool -bd                       list created LUNs in the storage pool (backing devices in lsmap -all)

lspv -clustername bb_cluster -sp bb_pool                           list physical volumes of shared storage pool (disk size, id)
lspv -clustername bb_cluster -capable                              list which disk can be added to the cluster

lscluster -c                                                       list cluster configuration
lscluster -d                                                       list disk details of the cluster
lscluster -m                                                       list info about nodes (interfaces) of the cluster
lscluster -s                                                       list network statistics of the local node (packets sent...)
lscluster -i -n bb_cluster                                         list interface information of the cluster

odmget -q "name=hdiskpower2 and attribute=unique_id" CuAt          checking LUN ID (as root)



Commands for remove:
rmbdsp -clustername bb_cluster -sp bb_pool -bd bb_disk1            remove a created LUN (the backing device will be deleted from the vhost adapter)
                                                                   (disks, for example hdiskpower..., cannot be removed from the cluster)
cluster -rmnode -clustername bb_cluster -hostname bb_vios1         remove node from cluster
cluster -delete -clustername bb_cluster                            remove cluster completely

------------------------------------------------------------------------------

Create cluster and Shared Storage Pool:

1. create a cluster and pool: cluster -create ...
2. add additional nodes to the cluster: cluster -addnode
3. check which physical volumes can be added: lspv -clustername clusterX -capable
4. add a physical volume: chsp -add
5. create and map LUNs to clients: mkbdsp -clustername... (see the consolidated sketch below)
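Putting the steps together with the names used in the command section above (a consolidated sketch; hdiskpower3 is an assumed additional data LUN):

cluster -create -clustername bb_cluster -spname bb_pool -repopvs hdiskpower1 -sppvs hdiskpower2 -hostname bb_vio1
cluster -addnode -clustername bb_cluster -hostname bb_vio2
lspv -clustername bb_cluster -capable
chsp -add -clustername bb_cluster -sp bb_pool hdiskpower3
mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk1 -vadapter vhost0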


------------------------------------------------------------------------------

cleandisk -r hdiskX                    clean cluster signature from hdisk
cleandisk -s hdiskX                    clean storage pool signature from hdisk
/var/vio/SSP                           cluster related directory (and files) will be created in this path

------------------------------------------------------------------------------

Managing snapshots:

Snapshots of a LUN can be created, which can later be restored in case of any problems.

# snapshot -create bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1    <--create a snapshot

# snapshot -list -clustername bb_cluster -spname bb_pool                                 <--list snapshots of a storage pool
Lu Name          Size(mb)    ProvisionType    Lu Udid
bb_disk1         10240       THIN             4aafb883c949d36a7ac148debc6d4ee7
Snapshot
bb_disk1_snap

# snapshot -rollback bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1  <--rollback a snapshot to a LUN
$ snapshot -delete bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1    <--delete a snapshot

------------------------------------------------------------------------------

Setting alerts for Shared Storage Pools:

Because thin provisioning is in place, the real free space of the storage cannot be seen exactly. If the storage pool becomes 100% full, an I/O error will occur on the client LPAR. To avoid this, alerts can be configured:

$ alert -list -clustername bb_cluster -spname bb_pool
PoolName                 PoolID                             Threshold%
bb_pool                  000000000A8C1517000000005150C18D   35                        <--the alert threshold (alert when free space drops below this %)

# alert -set -clustername bb_cluster -spname bb_pool -type threshold -value 25        <--if free space goes below 25% it will alert

# alert -list -clustername bb_cluster -spname bb_pool
PoolName                 PoolID                             Threshold%
bb_pool                  000000000A8C1517000000005150C18D   25                        <--new value can be seen here

$ alert -unset -clustername bb_cluster -spname bb_pool                                <--unset an alert
The warning can be seen in the VIOS error log.
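To check it (a small illustration; errlog is the padmin command, errpt its root equivalent):

$ errlog              <--as padmin
# errpt -a            <--as root (detailed output)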

----------------------------------------

 

Addendum
Here are my notes of the Shared Storage Pool commands that you need to remember. I have said it before, but SSP is big on concepts and saves system admin time, yet it is very simple to operate.

 

Creating the pool (once only) by example:

  • Assumption: 1 GB Repository disk = hdisk15 and Pool data disks = hdisk7 to hdisk14, cluster and pool both called "alpha"
  • Assumption: IBM V7000 A has LUNs: hdisk7 hdisk8 hdisk9 hdisk10
  • Assumption: IBM V7000 B has LUNs: hdisk11 hdisk12 hdisk13 hdisk14

Examples

  • cluster -create -clustername alpha  -spname alpha  -repopvs hdisk15   -sppvs hdisk7 hdisk8   -hostname orangevios1.domain.com
  • cluster -addnode -clustername alpha  -hostname silvervios1.domain.com
  • failgrp -modify -fg Default -attr FGName=V7000A
  • failgrp -create -fg V7000B: hdisk11 hdisk12
  • pv -add -fg V7000A: hdisk9 hdisk10 V7000B: hdisk13 hdisk14

Shared Storage Pools - Daily work

Create a LU (Logical Unit - virtual disk in the pool)

  •  lu -remove | -map | -unmap | -list  [-lu name]

  •  lu -create -lu name -size nnG -vadapter vhostXX [-thick]

Examples

  • lu -create -lu lu42 -size 32G -vadapter vhost66 -thick

  • lu -map    -lu lu42           -vadapter vhost22

  • lu -list

  • lu -list -verbose

  • lu -remove -lu lu42

Snapshots - stop the VM for a safe, consistent disk image, but you could (if confident) take a live snapshot and rely on filesystem logs and application-based data recovery such as RDBMS transaction logs

  •  snapshot [-create -delete -rollback -list]  name [-lu <list-LU-names>]     -clustername x -spname z

 Examples

  • snapshot -create   bkup1 -lu lu42 lu43   -clustername alpha -spname alpha

  • snapshot -rollback bkup1    -clustername alpha -spname alpha

  • snapshot -delete    bkup1    -clustername alpha -spname alpha

 

Shared Storage Pool – Weekly Configuration & Monitoring

Configuration Details

  • cluster -list                  <-- yields cluster name

  • cluster -status -clustername alpha

  • cluster -status -clustername alpha -verbose <-- shows you the poolname

  • lscluster -d <-- yields all the hdisks with local names for each VIOS

Monitor pool use

  •  lssp -clustername alpha

  •  lssp -clustername alpha -sp alpha -bd

  • Note this command uses "-sp" and not "-spname" like  many others.

Monitor for issues

  • alert -set -type {threshold | overcommit} -value N

  • alert -list

  • Ongoing monitoring of VIOS logs for warnings

  • Note - Pool Alert Events are logged to your HMC, which can email them to people.
    Look for Resource-Name=VIOD_POOL     Description=Informational Message

 

Shared Storage Pool – Quarterly / Yearly Maintenance

Pool mirroring check

  •  failgrp -create       <-- once only when creating the pool
  •  failgrp -list [-verbose]

Growing the pool size and monitoring

  •  pv -list [-verbose]
  •  pv -list -capable    <-- check new LUN ready
  •  pv -add -fg a: hdisk100 b: hdisk101

Moving the pool to a different disk subsystem

  •  pv -replace -oldpv hdisk100 -newpv hdisk200


sh history

Sometimes, after connecting to a server with a terminal, you need to know "when?" a command was executed.

The history command shows which commands were run, but not when they were run.

However, if you add the following lines to /etc/profile, the history command will also display a timestamp.

#------------------------------------------------------------------------------
# Add Timestamp to history
#------------------------------------------------------------------------------
HISTTIMEFORMAT="%Y-%m-%d_%H:%M:%S\ "
export HISTTIMEFORMAT
#------------------------------------------------------------------------------

To verify, add the lines above to /etc/profile and then, at the prompt, run:

# source /etc/profile

After that, running the history command produces output like the following:

1003 2010-06-16_15:41:06\ /usr/local/apache/bin/apachectl stop
1004 2010-06-16_15:41:08\ /usr/local/apache/bin/apachectl start
1005 2010-06-16_16:00:43\ ls -arlt
1006 2010-06-16_16:00:43\ cd


For reference, I also recommend looking at the lastcomm command.


Source: http://cafe.naver.com/aix.cafe?iframe_url=/ArticleRead.nhn%3Farticleid=12215



Here is how to record a timestamp in the .sh_history file and how to check it.

In the /etc/environment file set

EXTENDED_HISTORY=ON

This takes effect at the next login; to apply it to the current shell, run

# . /etc/environment




If you open the history file, the entries look like this:

# vi .sh_history
exit #?1260175606#?
errpt #?1260236625#?
df -k #?1260236627#?
cd /APM #?1260236632#?
ls #?1260236632#?
cat .sh_history #?1260236642#?
cat /.sh_history #?1260236649#?
date #?1260236667#?
cd / #?1260236694#?
ls #?1260236695#?
cat .bash_history #?1260236720#?
cd /var/adm #?1260236733#?
s #?1260236733#?
ls #?1260236737#?
ls -l #?1260236756#?
cd acct #?1260236764#?
ls #?1260236764#?
ls -al #?1260236765#?
cd #?1260236821#?
fc -t #?1260236823#?
ccat /.sh_history #?1260237296#?



To check the timestamps, use the fc command:

# fc -t
597 2009/12/08 10:44:09 :: cat /.sh_history
598 2009/12/08 10:44:27 :: date
599 2009/12/08 10:44:54 :: cd /
600 2009/12/08 10:44:55 :: ls
601 2009/12/08 10:45:20 :: cat .bash_history
602 2009/12/08 10:45:33 :: cd /var/adm
603 2009/12/08 10:45:33 :: s
604 2009/12/08 10:45:37 :: ls
605 2009/12/08 10:45:56 :: ls -l
606 2009/12/08 10:46:04 :: cd acct
607 2009/12/08 10:46:04 :: ls
608 2009/12/08 10:46:05 :: ls -al
609 2009/12/08 10:47:01 :: cd
610 2009/12/08 10:47:03 :: fc -t
611 2009/12/08 10:54:56 :: at /.sh_history
612 2009/12/08 10:56:27 :: fc –t



To check the last 100 lines, enter

# fc -t 100

Note that this parameter is available from AIX 5.3 on; it does not work on AIX 5.2 and earlier.

For reference, to increase the history size, add the following to /etc/environment:

HISTSIZE=#####

The default is 128.




Power Systems - 2015 4Q Announce overview

Attachment: RRC-BPeduc_Power-4Q-overview.pptx
