Shared Storage Pools:

A shared storage pool is a pool of SAN storage devices that can span multiple Virtual I/O Servers. It is based on a cluster of Virtual I/O Servers and a distributed data object repository. (The repository uses a cluster filesystem developed specifically for storage virtualization; on a Virtual I/O Server it appears under a path such as: /var/vio/SSP/bb_cluster/D_E_F_A_U_L_T_061310)

When using shared storage pools, the Virtual I/O Server provides storage through logical units that are assigned to client partitions. A logical unit is a file-backed storage device that resides in the cluster filesystem in the shared storage pool. It appears as a virtual SCSI disk in the client partition.

The Virtual I/O Servers that are part of the shared storage pool are joined together to form a cluster. Only Virtual I/O Server partitions can be part of a cluster. The Virtual I/O Server clustering model is based on Cluster Aware AIX (CAA) and RSCT technology.

A cluster can consist of (the installed level can be checked with ioslevel, as shown below):
VIOS version 2.2.0.11, Fix Pack 24, Service Pack 1         <--1 node
VIOS version 2.2.1.3                                       <--4 nodes
VIOS version 2.2.2.0                                       <--16 nodes
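
A quick check from the padmin shell (the output below is only an illustration, the value depends on your installation):

$ ioslevel
2.2.2.0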

------------------------------------------------------------------------------

Thin provisioning

A thin-provisioned device represents a larger image than the actual physical disk space it is using. It is not fully backed by physical storage as long as the blocks are not in use. A thin-provisioned logical unit is defined with a user-specified size when it is created. It appears in the client partition as a virtual SCSI disk with that user-specified size. However, on a thin-provisioned logical unit, blocks on the physical disks in the shared storage pool are only allocated when they are used.

Consider a shared storage pool that has a size of 20 GB. If you create a logical unit with a size of 15 GB, the client partition will see a virtual disk with a size of 15 GB. But as long as the client partition does not write to the disk, only a small portion of that space will initially be used from the shared storage pool. If you create a second logical unit also with a size of 15 GB, the client partition will see two virtual SCSI disks, each with a size of 15 GB. So although the shared storage pool has only 20 GB of physical disk space, the client partition sees 30 GB of disk space in total.

After the client partition starts writing to the disks, physical blocks are allocated in the shared storage pool and the amount of free space in the pool decreases. Deleting files or logical volumes on a client partition does not return that space to the shared storage pool.

When the shared storage pool is full, client partitions will see an I/O error on the virtual SCSI disk. Therefore, even though the client partition reports free space on a disk, that information may not be accurate once the shared storage pool is full.

To prevent such a situation, the shared storage pool provides a threshold that, when reached, writes an event to the error log of the Virtual I/O Server.

(If you use the -thick flag with the mkbdsp command, a thick-provisioned (fully allocated) logical unit is created instead of a thin-provisioned one, so the full disk space is reserved for the client up front.)
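
For illustration, a sketch using the mkbdsp syntax shown in the command reference below (the backing-device names bb_thin_lu and bb_thick_lu are made up for this example):

$ mkbdsp -clustername bb_cluster -sp bb_pool 15G -bd bb_thin_lu -vadapter vhost0             <--thin-provisioned (default): blocks allocated only when written
$ mkbdsp -clustername bb_cluster -sp bb_pool 15G -bd bb_thick_lu -vadapter vhost0 -thick     <--thick-provisioned: the full 15 GB is reserved in the pool up front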

------------------------------------------------------------------------------

When a cluster is created, you must specify one physical volume for the repository and one or more physical volumes for the storage pool. The storage pool physical volumes are used to provide storage to the client partitions. The repository physical volume is used to perform cluster communication and to store the cluster configuration.

If you need to increase the free space in the shared storage pool, you can either add an additional physical volume or you can replace an existing volume with a bigger one. Physical disks cannot be removed from the shared storage pool.


Requirements:
-each VIO Server must be able to resolve the hostnames of all other VIO Servers in the cluster (DNS or /etc/hosts must contain entries for every VIO Server)
-the hostname command should show the FQDN (including the domain, e.g. domain.com)
-VLAN-tagged interfaces are not supported for cluster communication in earlier VIOS versions
-the Fibre Channel adapters should be set to dyntrk=yes and fc_err_recov=fast_fail (see the sketch after this list)
-the disk reserve policy should be set to no_reserve, and all VIO Servers must have the disks in Available state
-1 disk is needed for the repository (min 10 GB) and 1 or more for data (min 10 GB); these should be SAN FC LUNs
-Active Memory Sharing paging space cannot be on an SSP disk
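
A minimal sketch of setting those attributes from the padmin shell; fscsi0 and hdiskpower2 are placeholder device names, and the reserve attribute can be named differently depending on the multipath driver (e.g. reserve_lock on some EMC devices):

$ chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm        <---perm: change takes effect at the next reboot/device reconfigure
$ chdev -dev hdiskpower2 -attr reserve_policy=no_reserve
$ lsdev -dev fscsi0 -attr                                                <--verify the adapter attributes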

------------------------------------------------------------------------------

Commands for create:
cluster -create -clustername bb_cluster -spname bb_pool -repopvs hdiskpower1 -sppvs hdiskpower2 -hostname bb_vio1
        -clustername bb_cluster                                                  <--name of the cluster
        -spname bb_pool                                                          <--storage pool name
        -repopvs hdiskpower1                                                     <--disk of repository
        -sppvs hdiskpower2                                                       <--storage pool disk
        -hostname bb_vio1                                                        <--VIO Server hostname (where to create cluster)
(This command creates the cluster, starts the CAA daemons, and creates the shared storage pool)


cluster -addnode -clustername bb_cluster -hostname bb_vio2                       add a node to the cluster (up to 16 nodes per cluster)
chsp -add -clustername bb_cluster -sp bb_pool hdiskpower2                        add a disk to the shared storage pool

mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk1 -vadapter vhost0     creating a 10G LUN and assigning to vhost0 (lsmap -all will show it)
mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk2                      creating a 10G LUN

mkbdsp -clustername bb_cluster -sp bb_pool -bd bb_disk2 -vadapter vhost0         assigning LUN to a vhost adapter (command works only if bd name is unique)
mkbdsp -clustername bb_cluster -sp bb_pool -luudid c7ef7a2 -vadapter vhost0      assigning an earlier created LUN (by LUN ID) to a vhost adapter (same as above)
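
On the client LPAR, the mapped LU appears as an ordinary virtual SCSI disk; a quick check with standard AIX commands (run as root on the client, shown here as a sketch):

# cfgmgr                      <--scan for the newly mapped virtual SCSI disk
# lsdev -Cc disk              <--the LU shows up as a new hdisk (Virtual SCSI Disk Drive)
# lspv                        <--list the new hdisk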



Commands for display:
cluster -list                                                      display cluster name and ID
cluster -status -clustername bb_cluster                            display cluster state and pool state on each node

lssp -clustername bb_cluster                                       list storage pool details (pool size, free space...)
lssp -clustername bb_cluster -sp bb_pool -bd                       list created LUNs in the storage pool (backing devices in lsmap -all)

lspv -clustername bb_cluster -sp bb_pool                           list physical volumes of shared storage pool (disk size, id)
lspv -clustername bb_cluster -capable                              list which disk can be added to the cluster

lscluster -c                                                       list cluster configuration
lscluster -d                                                       list disk details of the cluster
lscluster -m                                                       list info about nodes (interfaces) of the cluster
lscluster -s                                                       list network statistics of the local node (packets sent...)
lscluster -i -n bb_cluster                                         list interface information of the cluster

odmget -q "name=hdiskpower2 and attribute=unique_id" CuAt          checking LUN ID (as root)
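
The odmget output is an ODM stanza; the value field holds the unique ID you compare against the LU/PV IDs reported by the cluster (value shortened here, only the shape is shown):

CuAt:
        name = "hdiskpower2"
        attribute = "unique_id"
        value = "..."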



Commands for remove:
rmbdsp -clustername bb_cluster -sp bb_pool -bd bb_disk1            remove created LUN (backing device will be deleted from vhost adapter)
                                                                   (physical disks, for example hdiskpower devices, cannot be removed from the cluster)
cluster -rmnode -clustername bb_cluster -hostname bb_vios1         remove node from cluster
cluster -delete -clustername bb_cluster                            remove cluster completely

------------------------------------------------------------------------------

Create cluster and Shared Storage Pool:

1. create the cluster and pool: cluster -create ...
2. add additional nodes to the cluster: cluster -addnode ...
3. check which physical volumes can be added: lspv -clustername clusterX -capable
4. add a physical volume: chsp -add ...
5. create and map LUNs to clients: mkbdsp -clustername ...
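
Put together, a sketch of the whole flow with the example names used above (bb_cluster, bb_pool, bb_vio1/bb_vio2; hdiskpower3 is a made-up additional pool disk):

$ cluster -create -clustername bb_cluster -spname bb_pool -repopvs hdiskpower1 -sppvs hdiskpower2 -hostname bb_vio1
$ cluster -addnode -clustername bb_cluster -hostname bb_vio2
$ lspv -clustername bb_cluster -capable
$ chsp -add -clustername bb_cluster -sp bb_pool hdiskpower3
$ mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk1 -vadapter vhost0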


------------------------------------------------------------------------------

cleandisk -r hdiskX                    clean cluster signature from hdisk
cleandisk -s hdiskX                    clean storage pool signature from hdisk
/var/vio/SSP                           cluster related directory (and files) will be created in this path

------------------------------------------------------------------------------

Managing snapshots:

Snapshots of a LUN can be created and later rolled back in case of problems.

$ snapshot -create bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1    <--create a snapshot

$ snapshot -list -clustername bb_cluster -spname bb_pool                                 <--list snapshots of a storage pool
Lu Name          Size(mb)    ProvisionType    Lu Udid
bb_disk1         10240       THIN             4aafb883c949d36a7ac148debc6d4ee7
Snapshot
bb_disk1_snap

$ snapshot -rollback bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1  <--roll back a snapshot to a LUN
$ snapshot -delete bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1    <--delete a snapshot

------------------------------------------------------------------------------

Setting alerts for Shared Storage Pools:

Because thin provisioning is in place, the real free space of the pool cannot be judged from what the clients see. If the storage pool becomes 100% full, the client LPARs will get I/O errors. To avoid this, alerts can be configured:

$ alert -list -clustername bb_cluster -spname bb_pool
PoolName                 PoolID                             Threshold%
bb_pool                  000000000A8C1517000000005150C18D   35                        <--configured free-space threshold percentage

$ alert -set -clustername bb_cluster -spname bb_pool -type threshold -value 25        <--alert when free space drops below 25%

$ alert -list -clustername bb_cluster -spname bb_pool
PoolName                 PoolID                             Threshold%
bb_pool                  000000000A8C1517000000005150C18D   25                        <--new value can be seen here

$ alert -unset -clustername bb_cluster -spname bb_pool                                <--unset an alert
(The warning is written to the VIOS error log and can be seen with errlog.)

------------------------------------------------------------------------------

Addendum:
Here are my notes of the Shared Storage Pools commands that you need to remember. I have said it before, but SSP is big on concepts and admin time saving, yet very simple to operate.

 

Creating the pool (once only) by example:

  • Assumption: 1 GB Repository disk = hdisk15 and Pool data disks = hdisk7 to hdisk14, cluster and pool both called "alpha"
  • Assumption: IBM V7000 A has LUNs: hdisk7 hdisk8 hdisk9 hdisk10
  • Assumption: IBM V7000 B has LUNs: hdisk11 hdisk12 hdisk13 hdisk14

Examples

  • cluster -create -clustername alpha  -spname alpha  -repopvs hdisk15   -sppvs hdisk7 hdisk8   -hostname orangevios1.domain.com
  • cluster -addnode -clustername alpha  -hostname silvervios1.domain.com
  • failgrp -modify -fg Default -attr FGName=V7000A
  • failgrp -create -fg V7000B: hdisk11 hdisk12
  • pv -add -fg V7000A: hdisk9 hdisk10 V7000B: hdisk13 hdisk14

Shared Storage Pools - Daily work

Create a LU (Logical Unit - virtual disk in the pool)

  •  lu -remove | -map | -unmap | -list  [-lu name]

  •  lu -create -lu name -size nnG -vadapter vhostXX [-thick]

Examples

  • lu -create -lu lu42 -size 32G -vadapter vhost66 -thick

  • lu -map    -lu lu42           -vadapter vhost22

  • lu -list

  • lu -list -verbose

  • lu -remove -lu lu42

Snapshots - stop the VM for a safe, consistent disk image; you could (if confident) take a live snapshot and rely on filesystem logs and application-based data recovery such as RDBMS transaction logs.

  •  snapshot [-create -delete -rollback -list]  name [-lu <list-LU-names>]     -clustername x -spname z

 Examples

  • snapshot -create   bkup1 -lu lu42 lu43   -clustername alpha -spname alpha

  • snapshot -rollback bkup1    -clustername alpha -spname alpha

  • snapshot -delete    bkup1    -clustername alpha -spname alpha

 

Shared Storage Pool – Weekly Configuration & Monitoring

Configuration Details

  • cluster -list                  <-- yields cluster name

  • cluster -status -clustername alpha

  • cluster -status -clustername alpha -verbose <-- shows you the poolname

  • lscluster -d <-- yields all the hdisks with local names for each VIOS

Monitor pool use

  •  lssp -clustername alpha

  •  lssp -clustername alpha -sp alpha -bd

  • Note this command uses "-sp" and not "-spname" like  many others.

Monitor for issues

  • alert -set -type {threshold | overcommit} -value N

  • alert -list

  • Ongoing monitoring of VIOS logs for warnings

  • Note - Pool Alert Events are logged to your HMC, and the HMC can forward them by email.
    Look for Resource-Name=VIOD_POOL     Description=Informational Message

 

Shared Storage Pool – Quarterly / Yearly Maintenance

Pool mirroring check

  •  failgrp -create       <-- once only when creating the pool
  •  failgrp -list [-verbose]

Growing the pool size and monitoring

  •  pv -list [-verbose]
  •  pv -list -capable    <-- check new LUN ready
  •  pv -add -fg a: hdisk100 b: hdisk101

Moving the pool to a different disk subsystem

  •  pv -replace -oldpv hdisk100 -newpv hdisk200

------------------------------------------------------------------------------

How to view the current semaphore values

To view the maximum number of semaphores and semaphore sets which can be created, type:

cat /proc/sys/kernel/sem


File description: /proc/sys/kernel/sem

This file contains 4 numbers defining limits for System V IPC semaphores. These fields are, in order:

SEMMSL - The maximum number of semaphores per semaphore set.
SEMMNS - A system-wide limit on the number of semaphores across all semaphore sets (the maximum number of semaphores in the system).
SEMOPM - The maximum number of operations in a single semop call.
SEMMNI - A system-wide limit on the maximum number of semaphore identifiers (semaphore sets).
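
A small shell sketch that labels the four fields when reading the file, using the order described above:

awk '{print "SEMMSL="$1, "SEMMNS="$2, "SEMOPM="$3, "SEMMNI="$4}' /proc/sys/kernel/sem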





How to change the semaphore values on Linux


# echo 250 32000 256 256 > /proc/sys/kernel/sem

# cat /proc/sys/kernel/sem
250 32000 256 256



To make the change permanent, add or change the following line in the file /etc/sysctl.conf. This file is used during the boot process.


# echo "kernel.sem = 250 32000 256 256" >> /etc/sysctl.conf




Revised file content for /etc/sysctl.conf


# more /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
kernel.sem=250 32000 256 256


# cat /proc/sys/kernel/sem
250 32000 256 256

 

 

 

------------------------------------------------------------------------------

 

RHEL 6.4 Bonding

- Example: bonding configuration for two NICs (eth0 & eth1)

 1. Verify that the NICs are recognized
  - #ifconfig


 2. Create the required files (check /etc/sysconfig/network-scripts first and create only the files that do not already exist)

  - #touch /etc/sysconfig/network-scripts/ifcfg-bond0

  - #touch /etc/sysconfig/network-scripts/ifcfg-eth0

  - #touch /etc/sysconfig/network-scripts/ifcfg-eth1


 3. Edit the ifcfg-bond0 file (enter the content below with vi and save)

  - #vi /etc/sysconfig/network-scripts/ifcfg-bond0

    DEVICE=bond0
    IPADDR=000.000.000.000 /* your IP address */
    NETMASK=000.000.000.000 /* netmask */
    GATEWAY=000.000.000.000 /* gateway */
    DNS1=000.000.000.000 /* primary DNS */
    DNS2=000.000.000.000 /* secondary DNS */
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no

    /* if you are not using a static IP, set BOOTPROTO=dhcp instead */


 4. Edit the eth0 file

  - vi /etc/sysconfig/network-scripts/ifcfg-eth0

    DEVICE=eth0
    USERCTL=no
    BOOTPROTO=none
    NM_CONTROLLED=no
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes


 5. Edit the eth1 file (after editing eth0 you can generate it quickly with: #sed 's/eth0/eth1/' ifcfg-eth0 > ifcfg-eth1)

  - vi /etc/sysconfig/network-scripts/ifcfg-eth1

    DEVICE=eth1
    USERCTL=no
    BOOTPROTO=none
    NM_CONTROLLED=no
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes


 6. Edit the bonding.conf file

  - #touch /etc/modprobe.d/bonding.conf

  - vi /etc/modprobe.d/bonding.conf
   
    alias bond0 bonding
    options bond0 mode=1 miimon=100
    /* mode 0 - balance-rr : (round-robin) load balancing; a different NIC is used for each transmitted packet
       mode 1 - active-backup : failover; only one slave in the bond is active, and another slave takes over if the active port fails
       mode 2 - balance-xor : load balancing; the NIC is chosen by XOR-ing the source and destination MAC addresses
       mode 3 - broadcast : fault tolerance; data is sent on all slaves (failover); rarely used
       mode 4 - 802.3ad : dynamic link aggregation using the IEEE 802.3ad protocol;
                          increases bandwidth, balances load, and supports failover
       mode 5 - balance-tlb (TLB) : adaptive transmit load balancing; outgoing packets use the least-loaded NIC
       mode 6 - balance-alb (ALB) : adaptive load balancing; both transmitted and received packets use the least-loaded NIC

       miimon : how often the link state of the interfaces is checked, in milliseconds; the default is 0, which disables failover monitoring */


 7. Edit /etc/sysconfig/network

  - vi /etc/sysconfig/network

    NETWORKING=yes
    NETWORKING_IPV6=no
    HOSTNAME=your-hostname
    GATEWAYDEV=bond0
    (leave any other existing entries as they are)


 8. Load the bonding module

  - #modprobe bonding


 9. Restart the network service

  - #service network restart


 10. Confirm that bond0 is the Master interface and the remaining NICs are Slaves
  - #ifconfig


 11. Verify with a physical test.


 12. Done
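
Once the bond is up, a quick way to check its state is the kernel's bonding status file (bond0 is the device name configured above):

# cat /proc/net/bonding/bond0                               <--shows the bonding mode, MII status and the slave interfaces
# grep "Currently Active Slave" /proc/net/bonding/bond0     <--which NIC is active in active-backup (mode 1)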

 



After logging in to a server via a terminal, you sometimes need to know when a particular command was executed.

The history command shows which commands were run, but not when they were run.

However, if you add the following lines to /etc/profile, the history command will also display a timestamp.

#------------------------------------------------------------------------------
# Add Timestamp to history
#------------------------------------------------------------------------------
HISTTIMEFORMAT="%Y-%m-%d_%H:%M:%S\ "
export HISTTIMEFORMAT
#------------------------------------------------------------------------------

To verify, add the lines above to /etc/profile and then run at the prompt:

# source /etc/profile

After that, running the history command produces output like the following:

1003 2010-06-16_15:41:06\ /usr/local/apache/bin/apachectl stop
1004 2010-06-16_15:41:08\ /usr/local/apache/bin/apachectl start
1005 2010-06-16_16:00:43\ ls -arlt
1006 2010-06-16_16:00:43\ cd


As a side note, the lastcomm command is also worth a look (see the sketch below).
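
A minimal sketch of lastcomm usage; it relies on process accounting being enabled (e.g. the psacct/acct package with accounting turned on), so treat that as an assumption about your setup:

# lastcomm | head             <--most recently recorded commands, with user, tty and start time
# lastcomm root               <--records matching "root" (e.g. the root user)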

------------------------------------------------------------------------------



Source: http://cafe.naver.com/aix.cafe?iframe_url=/ArticleRead.nhn%3Farticleid=12215



The following describes how to record and check timestamps in the .sh_history file.

In the /etc/environment file, set:

EXTENDED_HISTORY=ON

This takes effect at the next login; to apply it to the current shell, run:

# . /etc/environment




If you open the history file, the entries look like this:

# vi .sh_history
exit #?1260175606#?
errpt #?1260236625#?
df -k #?1260236627#?
cd /APM #?1260236632#?
ls #?1260236632#?
cat .sh_history #?1260236642#?
cat /.sh_history #?1260236649#?
date #?1260236667#?
cd / #?1260236694#?
ls #?1260236695#?
cat .bash_history #?1260236720#?
cd /var/adm #?1260236733#?
s #?1260236733#?
ls #?1260236737#?
ls -l #?1260236756#?
cd acct #?1260236764#?
ls #?1260236764#?
ls -al #?1260236765#?
cd #?1260236821#?
fc -t #?1260236823#?
ccat /.sh_history #?1260237296#?



To check the timestamps, use the fc command:

# fc -t
597 2009/12/08 10:44:09 :: cat /.sh_history
598 2009/12/08 10:44:27 :: date
599 2009/12/08 10:44:54 :: cd /
600 2009/12/08 10:44:55 :: ls
601 2009/12/08 10:45:20 :: cat .bash_history
602 2009/12/08 10:45:33 :: cd /var/adm
603 2009/12/08 10:45:33 :: s
604 2009/12/08 10:45:37 :: ls
605 2009/12/08 10:45:56 :: ls -l
606 2009/12/08 10:46:04 :: cd acct
607 2009/12/08 10:46:04 :: ls
608 2009/12/08 10:46:05 :: ls -al
609 2009/12/08 10:47:01 :: cd
610 2009/12/08 10:47:03 :: fc -t
611 2009/12/08 10:54:56 :: at /.sh_history
612 2009/12/08 10:56:27 :: fc -t



To check the last 100 lines, enter:

# fc -t 100

Note that this parameter is available starting with AIX 5.3; it does not work on AIX 5.2 or earlier.

To increase the size of the history, add the following to /etc/environment:

HISTSIZE=#####

The default is 128.



------------------------------------------------------------------------------


 

Checking Linux CPU utilization:

top -n 1 | grep -i cpu\(s\)| awk '{print $5}' | tr -d "%id," | awk '{print 100-$1}'


Checking Linux memory utilization:

top -n1 | grep Mem:


 


Top 5 processes by memory usage on Linux:

ps -eo user,pid,ppid,rss,size,vsize,pmem,pcpu,time,cmd --sort -rss | head -n 6

(6 = top 5 plus 1 header line)

 

 

Method 1: cpuinfo

[root@zetawiki ~]# cat /proc/cpuinfo | egrep 'siblings|cpu cores' | head -2
siblings	: 8
cpu cores	: 4
→ siblings is twice cpu cores, so hyper-threading is enabled

Method 2: dmidecode #1

[root@zetawiki ~]# dmidecode -t processor | egrep 'Core Count|Thread Count' | head -2
	Core Count: 4
	Thread Count: 8
→ Thread Count is twice Core Count, so hyper-threading is enabled

Method 3: dmidecode #2

dmidecode -t processor | grep HTT
Example output (hyper-threading enabled)
[root@zetawiki ~]# dmidecode -t processor | grep HTT | head -1
		HTT (Hyper-threading technology)
Example output (hyper-threading disabled)
[root@zetawiki2 ~]# dmidecode -t processor | grep HTT | head -1
                     HTT (Multi-threading)
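
Another quick check, assuming util-linux's lscpu is installed; a value greater than 1 means SMT/hyper-threading is active:

lscpu | grep "Thread(s) per core"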

------------------------------------------------------------------------------



Source: http://blog.naver.com/xerosda/30174926338


Linux 2.6.32-279.el6.x86_64 #1 SMP Wed Jun 13 18:24:36 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

Introduction

This section covers how to check the number of CPU cores and related CPU information.

Note that with Intel Hyper-Threading, the OS (Windows, Linux, etc.) may report twice the actual number of physical cores.

This is shown below.

Checking the CPU model

[root@localhost ~]# grep "model name" /proc/cpuinfo | tail -1
model name : Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz

Total number of CPU cores

[root@localhost ~]# grep -c processor /proc/cpuinfo
16

-> 16 logical (virtual) CPU cores, which means 8 physical cores.

Number of physical CPUs

[root@localhost ~]# grep "physical id" /proc/cpuinfo | sort -u | wc -l
1

Physical cores per CPU

[root@localhost ~]# grep "cpu cores" /proc/cpuinfo | tail -1
cpu cores : 8

Displaying all CPU information

[root@localhost ~]# cat /proc/cpuinfo

processor : 15
vendor_id : GenuineIntel
cpu family : 6
model  : 45
model name : Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
stepping : 7
cpu MHz  : 1200.000
cache size : 20480 KB
physical id : 0
siblings : 16
core id  : 7
cpu cores : 8
apicid  : 15
initial apicid : 15
fpu  : yes
fpu_exception : yes
cpuid level : 13
wp  : yes
flags  : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
bogomips : 3999.82
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

 

-> processor numbering starts at 0

Summary

Summarizing the information above:

Physical CPUs: 1

Physical cores per CPU: 8

Total physical cores: 8

Total logical (virtual) cores: 16


------------------------------------------------------------------------------


 

 

 

SAP HANA System Replication on SLES for SAP Applications

 

https://www.suse.com/docrep/documents/wvhlogf37z/sap_hana_system_replication_on_sles_for_sap_applications.pdf

 

 

Current information about SAPHanaSR – the resource agents for SAP HANA system replication

https://www.suse.com/communities/blog/current-information-saphanasr-resource-agents-sap-hana-system-replication/

 
