Source: https://ts2019-usa.readthedocs.io/en/latest/karbon/karbon.html

 

Karbon

The estimated time to complete this lab is 60 minutes.

Overview

Nutanix Karbon is a curated, turnkey, enterprise-grade Kubernetes offering that simplifies the provisioning, operations, and lifecycle management of Kubernetes on-prem.

Karbon provides a consumer-grade experience for delivering Kubernetes on-prem, greatly reducing the OpEx of the dedicated DevOps or SRE teams otherwise needed to keep Kubernetes online, up to date, and integrated with 3rd party components and tooling.

In this lab you will deploy a Kubernetes cluster using Karbon and then deploy multiple containers, referred to as Kubernetes pods, to run a sample application.

If you already have an understanding of containers, Kubernetes, challenges, and use cases, jump to Lab Setup and Creating a Karbon Cluster.

What are Containers?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Unlike traditional virtualization, containers are substantially more light-weight than VMs, with many containers capable of sharing the same host OS instance.

Containers provide two key features: a packaging mechanism and a runtime environment.

At the runtime level, the container allows an application to run as an isolated process with its own view of the operating system. While VMs provide isolation via virtualized hardware, containers leverage the ability of the Linux kernel to provide isolated namespaces for individual processes. This lightweight nature means each application gets its own container, preventing dependency conflicts.

As a packaging mechanism, a container is typically just a tarball: a way to bundle the code, configuration and dependencies of an application into a single file. This eliminates the problem of “It worked on my environment, why doesn’t it work on yours,” because everything necessary to run the application consistently is transported with the container. Ideally, applications produce the same output regardless of environment, and containerization makes that ideal a lot easier to reach. The result is a containerized application that will start, stop, make requests and log the same way.
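
As a quick illustration (a minimal sketch, assuming Docker is installed; the image tag and port mapping below are only examples), the same packaged image runs identically on any host that can pull it:

docker pull nginx:1.17                             # fetch the packaged application and its dependencies
docker run -d --name web -p 8080:80 nginx:1.17     # run it as an isolated process with its own namespaces
docker ps                                          # confirm the container is running
docker logs web                                    # the application logs the same way everywhere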

Container Benefits and Issues

For any business, containers represent a large opportunity:

  • Developers will spend less time debugging environment issues and more time writing code.
  • Server bills will shrink, because more applications can fit on a server using containers than in traditional deployments.
  • Containers can run anywhere, increasing the available deployment options.

For complex applications consisting of multiple components, containers vastly simplify updates. Placing each component in a container makes it simple to make changes without having to worry about unintended interactions with other components.

Use cases for containerized workloads include:

  • Continuous Integration Continuous Delivery (CI/CD) development
  • Application modernization/Microservices
  • API, web, and backend app development/delivery
  • Application cost containment
  • Enabling hybrid cloud

While breaking down applications into microservices, or discrete functional parts, has clear benefits, having more parts to manage can introduce complexities for configuration, service discovery, load balancing, resource scaling, and discovering and fixing failures. Managing this complexity manually isn’t scalable.

Introducing Kubernetes

Kubernetes is a container orchestration platform, open sourced by Google in 2014, that helps manage distributed, containerized applications at massive scale. According to Redmonk, 54% of Fortune 100 companies are running Kubernetes in some form, with adoption coming from every sector.

Kubernetes delivers production-grade container orchestration, automating container configuration, simplifying scaling and lifecycle management, and managing resource allocation. Kubernetes can run anywhere, whether you want your infrastructure on-premises, in a public cloud, or in a hybrid configuration of both.

However, vanilla Kubernetes presents its own challenges that make it difficult for organizations to adopt the technology. Building a production-ready deployment that is robust, easy to maintain (automated), and integrated with capabilities such as networking, logging, analytics, and secret management could take months.

Kubernetes' aggressive quarterly release cycle (releases are deprecated by the community after three quarters) can also create adoption challenges for enterprises. Finally, from a supportability and risk perspective, maintaining your own custom Kubernetes stack for production applications is analogous to running a custom-made Linux distribution, which is virtually unheard of in the enterprise.

As previously stated, Nutanix Karbon provides a turn-key solution to address these critical Kubernetes challenges.

Lab Setup

This lab requires applications provisioned as part of the Windows Tools VM.

If you have not yet deployed this VM, see the linked steps before proceeding with the lab.

It is highly recommended that you connect to the Tools VM using the Microsoft Remote Desktop client rather than the VM console launched via Prism. An RDP connection will allow you to copy and paste between your device and the VMs.

Creating a Karbon Cluster

In this exercise you will create a production ready Kubernetes cluster with Nutanix Karbon.

In Prism Central, select ☰ > Services > Karbon.

Note

If Karbon has not already been enabled on your cluster, click the Enable Karbon button when prompted. Once clicked, the process should take approximately 2 minutes to complete. During this time Prism Central is deploying the Karbon control plane, which runs as a set of containers within the Prism Central VM.

Click the provided link to launch the Karbon Console.

Note

If at any point your Karbon session times out, you can log in again using your Prism Central admin credentials.

To begin provisioning a Karbon cluster, click + Create Cluster.

On the Name and Environment tab, fill out the following fields:

  • Name - Initials-karbon
  • Cluster - Select Your Nutanix cluster
  • Kubernetes Version - 1.10.3
  • Host OS Image - centos

Do NOT use the 1.8.x Kubernetes Version selected by default.

Note

Your cluster has been pre-staged with a compatible CentOS image for use with Karbon.

Karbon currently supports CentOS 7.5.1804 and Ubuntu 16.04 and requires that these images be downloaded directly from Nutanix.

To stage another cluster with the supported CentOS image, add http://download.nutanix.com/karbon/0.8/acs-centos7.qcow2 as “acs-centos”.

To stage another cluster with the supported Ubuntu image, add http://download.nutanix.com/karbon/0.8/acs-ubuntu1604.qcow2 as “acs-ubuntu”.

Click Next.

Next you will define the number of container host VMs and compute requirements, starting with Worker VMs.

Worker nodes are responsible for running containers deployed onto the Kubernetes cluster. Each Worker node runs the kubelet and kube-proxy services.

For the purposes of this non-production exercise you will reduce the amount of memory consumed by default by each worker and etcd VM.

On the Worker Configuration tab, fill out the following fields:

  • Number of Workers - 3 (Default)
  • Memory - 6 GiB
  • Size - 120 GiB (Default)
  • VCPU - 4 (Default)

Click Next.

Next you will define the compute requirements for the Master and etcd nodes.

The Master node controls the Kubernetes cluster and provides the kube-apiserver, kube-controller-manager, and kube-scheduler services.

The etcd nodes provide a distributed key-value store which Kubernetes uses to manage cluster state, similar to how Nutanix leverages Zookeeper.

On the Master Configuration tab, fill out the following fields:

  • Master Resources > Memory - 4 GiB (Default)
  • Master Resources > Size - 120 GiB (Default)
  • Master Resources > VCPU - 2 (Default)
  • etcd Resources > Number of VMs - 3 (Default)
  • etcd Resources > Memory - 4 GiB
  • etcd Resources > Size - 40 GiB (Default)
  • etcd Resources > VCPU - 2 (Default)

Click Next.

Next you will configure the networking for both the host VMs and pods. Karbon utilizes Flannel to provide a layer 3 IPv4 network between the nodes within the Karbon cluster.

Platforms like Kubernetes assume that each pod (container) has a unique, routable IP inside the cluster. The advantage of this model is that it removes the port mapping complexities that come from sharing a single host IP.

The Service CIDR defines the network range on which services (like etcd) are exposed. The Pod CIDR defines the network range used to assign IP addresses to pods. The default configuration allows for a maximum of 256 nodes with up to 256 pods per node.
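
Once the cluster is deployed and kubectl is configured (later in this lab), you can confirm which ranges are in use. This is a quick sketch assuming the default CIDRs above:

kubectl get service kubernetes -n default -o jsonpath='{.spec.clusterIP}'   # an address from the Service CIDR (172.19.0.0/16)
kubectl get pods --all-namespaces -o wide                                   # pod IPs are assigned from the Pod CIDR (172.20.0.0/16)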

On the Network tab, fill out the following fields:

  • Network Provider - Flannel (Default)
  • VM Network - Primary (Default)
  • Service CIDR - 172.19.0.0/16 (Default)
  • Pod CIDR - 172.20.0.0/16 (Default)

Click Next.

On the Storage Class tab, fill out the following fields:

  • Storage Class Name - default-storageclass-Initials
  • Prism Element Cluster - Your Nutanix cluster
  • Nutanix Cluster Username - admin
  • Nutanix Cluster Password - techX2019!
  • Storage Container Name - Default
  • File System - ext4 (Default)

Click Create.

Deployment of the cluster should take approximately 10 minutes. During this time, Karbon is pulling images from public image repositories for the master, etcd, and worker nodes, as well as flannel, the Nutanix Volumes plugin, and any additional Karbon plugins. Support for authenticated proxy and dark site image repositories will be added post-GA.

Filtering VMs for Initials-karbon in Prism Central will display the master, etcd, and worker VMs provisioned by Karbon.

In Prism Element > Storage > Volume Group, Karbon has created the pvc-… Volume Group, used as persistent storage for logging. Karbon leverages the Nutanix Kubernetes Volume Plug-In to present Nutanix Volumes to Kubernetes pods via iSCSI. This allows containers to take advantage of native Nutanix storage capabilities such as thin provisioning, zero suppression, compression, and more.
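
Once kubectl is configured later in this lab, you can also confirm the storage class Karbon created and its provisioner from the command line (a sketch; substitute the default-storageclass-Initials name you enter during cluster creation):

kubectl get storageclass
kubectl describe storageclass default-storageclass-Initials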

The Karbon cluster has finished provisioning when the Status of the cluster is Running.

Click on your cluster name (Initials-karbon) to access the Summary Page for your cluster.

Explore this view and note the ability to create and add additional storage classes and persistent storage volumes to the cluster. Additional persistent storage volumes could be leveraged for use cases such as containerized databases.

In 15 minutes or less, you have deployed a production-ready Kubernetes cluster with logging (EFK), networking (flannel), and persistent storage services.

Getting Started with Kubectl

Kubectl is the command line interface for running commands against Kubernetes clusters. Kubeconfig files contain information about clusters, users, namespaces, and authentication. The kubectl tool uses kubeconfig files to find and communicate with a Kubernetes cluster.

In this exercise you will use kubectl to perform basic operations against your newly provisioned Karbon cluster.

From within your Initials-Windows-ToolsVM VM, browse to Prism Central and open Karbon.

Select your Initials-karbon cluster and click Download kubeconfig.

Open PowerShell.

Note

If installed, you can also use a local instance of kubectl. The Tools VM is provided to ensure a consistent experience.

Instructions for setting up kubectl in Windows and macOS can be found here.

From PowerShell, run the following commands to configure kubectl:

cd ~
mkdir .kube
cd .kube
mv ~\Downloads\kubectl* ~\.kube\config
kubectl get nodes

Note

By default, kubectl looks for a file named config in the ~/.kube directory. Other locations can be specified using environment variables or by setting the --kubeconfig flag.
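
For example, if you keep the kubeconfig somewhere other than ~/.kube/config, either of the following approaches works (a sketch; the file path is illustrative):

$env:KUBECONFIG = "$HOME\Downloads\Initials-karbon-kubeconfig.cfg"                 # set for the current PowerShell session
kubectl get nodes

kubectl --kubeconfig "$HOME\Downloads\Initials-karbon-kubeconfig.cfg" get nodes    # or point at the file per command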

Verify that the output of the last command shows 1 master node and 3 worker nodes as Ready.

Next you will check the versions of the Kubernetes client and server by running the following command:

kubectl version

Deploying an Application

Now that you have successfully run commands against your Kubernetes cluster using kubectl, you are ready to deploy an application. In this exercise you will deploy Wordpress, a popular open-source content management system used for websites and blogs.

Using Initials-Windows-ToolsVM, open PowerShell and create a wordpress directory using the following command:

mkdir ~\wordpress
cd ~\wordpress

Kubernetes depends on YAML files to provision applications and define dependencies. YAML files are a human-readable text-based format for specifying configuration information. This application requires two YAML files to be stored in the wordpress directory.

Note

To learn more about Kubernetes application deployment and YAML files, click here.

Using your Initials-Windows-ToolsVM web browser, download the two YAML files used in this exercise: wordpress-deployment.yaml for Wordpress and mysql-deployment.yaml for the MySQL deployment used by Wordpress.

Move both files to the wordpress directory using the following command:

mv ~\Downloads\*.yaml ~\wordpress\
cd ~\wordpress\

Open the wordpress-deployment.yaml file with your preferred text editor.

Note

Sublime Text has been pre-installed on Initials-Windows-ToolsVM.

Under spec: > type:, change the value from LoadBalancer to NodePort and save the file. This change is required as Karbon does not yet support LoadBalancer.
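
If you prefer to make the edit from PowerShell instead of a text editor, a one-liner like the following should work (a sketch; it assumes the file contains the string type: LoadBalancer exactly once):

(Get-Content .\wordpress-deployment.yaml) -replace 'type: LoadBalancer', 'type: NodePort' | Set-Content .\wordpress-deployment.yaml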

Note

You can learn more about Kubernetes publishing service types here.

Open the mysql-deployment.yaml file and note that it requires an environment variable to define the MYSQL_ROOT_PASSWORD as part of the deployment. No changes are required to this file.

Define the secret to be used as the MySQL password by running the following command:

kubectl create secret generic mysql-pass --from-literal=password=Nutanix/4u!

Verify the command returns secret/mysql-pass created.

You can also verify the secret has been created by running the following command:

kubectl get secrets

Verify mysql-pass appears in the NAME column.
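
Kubernetes stores the secret value base64-encoded. If you want to confirm what was stored, the following PowerShell sketch decodes it:

$encoded = kubectl get secret mysql-pass -o jsonpath='{.data.password}'   # base64-encoded value
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($encoded))    # should print Nutanix/4u!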

You will now provision the MySQL database by running the following command:

kubectl create -f mysql-deployment.yaml

In addition to the MySQL service, the mysql-deployment.yaml also specifies that a persistent volume be created as part of the deployment. You can get additional details about the volume by running:

kubectl get pvc

You will note that the STORAGECLASS matches the default-storageclass-Initials provisioned by Karbon.
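
For more detail on the claim and the volume bound to it, describe it (a sketch; the claim name comes from mysql-deployment.yaml and may differ in your copy):

kubectl describe pvc mysql-pv-claim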

The volume also appears in Karbon under Initials-karbon > Volume.

To view all running pods on the cluster, which should currently only be your Wordpress MySQL database, run the following command:

kubectl get pods

To complete the application, deploy Wordpress by running the following command:

kubectl create -f wordpress-deployment.yaml

Verify both pods are displayed as Running using kubectl get pods.

Accessing Wordpress

You have confirmed the Wordpress application and its MySQL database are running. Configuration of Wordpress is done via its web interface, but to access the web interface you must first determine the IP addresses of your worker VMs and the port on which the pod is running.

The IP addresses of all cluster VMs are returned by the kubectl describe nodes command. You can run this and search for the InternalIP of any of your worker VMs, or run the following command to return only the hostnames and IP addresses:

kubectl describe nodes | Select-String -Pattern "Hostname:","InternalIP"
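
Alternatively, kubectl can list every node with its internal IP in a single table (see the INTERNAL-IP column):

kubectl get nodes -o wide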

To determine the port number of the Wordpress application, run the following command and note the TCP port mapped to port 80:

kubectl get services wordpress
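
If you prefer to pull just the port number, a jsonpath query like the following should work (a sketch; it assumes the wordpress service exposes a single port):

kubectl get service wordpress -o jsonpath='{.spec.ports[0].nodePort}'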

Open http://WORKER-VM-IP:WORDPRESS-SERVICE-PORT/ in a new browser tab to access the Wordpress installation.

Note

In the example shown, you would browse to http://10.21.78.72:32160. Your environment will have a different IP and port.

Click Continue and fill out the following fields:

  • Site Title - Initials’s Karbon Blog
  • Username - admin
  • Password - nutanix/4u
  • Your Email - noreply@nutanix.com

Click Install Wordpress.

After setup completes (a few seconds), click Log In and provide the credentials just configured.

Congratulations! Your Wordpress application and MySQL database setup is complete.

Exploring Logging & Visualization

Karbon provides a plug-in architecture to continually add functionality on top of vanilla Kubernetes. The first plug-in Karbon provides is an integrated logging services stack called EFK, short for Elasticsearch, Fluentd, and Kibana.

Elasticsearch is a real-time, distributed, and scalable search engine which allows for full-text and structured search, as well as analytics. It is commonly used to index and search through large volumes of log data, but can also be used to search many different kinds of documents.

Elasticsearch is commonly deployed alongside Kibana, a powerful data visualization frontend and dashboard for Elasticsearch. Kibana allows you to explore your Elasticsearch log data through a web interface, and build dashboards and queries to quickly answer questions and gain insight into your Kubernetes applications.

Fluentd is a popular data collector that runs on all Kubernetes nodes to tail container log files, filter and transform the log data, and deliver it to the Elasticsearch cluster, where it will be indexed and stored.
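
You can see the EFK components running as pods on your cluster. The exact namespace Karbon uses for its add-ons may vary, so this sketch simply matches the pods by name:

kubectl get pods --all-namespaces | Select-String -Pattern "elasticsearch","fluentd","kibana"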

Return to the Karbon Console and select your Initials-karbon cluster.

Select Add-on from the sidebar to view and manage available Karbon plugins.

Select Logging to launch the Kibana user interface.

Select Discover from the sidebar and define * as the Index Pattern. This wildcard will retrieve all available indices within Elasticsearch, including etcd, kubernetes, and systemd.

Click Next Step.

Select @timestamp from the Time Filter field name drop down menu to allow you to sort logging entries by their respective timestamps.

Click Create index pattern.

Select Discover again from the sidebar to view all logs from the Karbon cluster. You can reduce the amount of Kubernetes metadata displayed by adding the log field from the Available Fields list.

Advanced Kibana usage, including time series data visualization that can answer questions such as “What is the difference in service error rates between our last 3 application upgrades,” is covered in the Kibana User Guide.

Coming Soon!

The upcoming Karbon 1.0 GA release will be ready for production workloads. Additional features and functionality will include:

  • Pre-configured Production and Dev/Test cluster profiles to further simplify provisioning
  • Multi-Master VM support to provide an HA Kubernetes control plane
    • Active/passive Multi-Master HA out of the box
    • Support for 3rd party load balancers
  • The ability to add/remove worker node(s) to deployed clusters
  • Cluster level monitoring & alerting using Prometheus, an open-source systems monitoring and alerting system with an embedded time-series database originally developed by SoundCloud.
  • New Nutanix Container Storage Interface (CSI) Driver Support
    • CSI is the standard for exposing arbitrary block and file storage systems to Kubernetes
    • Support for Nutanix Volumes and Nutanix Files
  • Upgrades & Patching
    • Non-disruptive Karbon upgrades
    • Immutable OS upgrades of all cluster nodes
  • Support for native Kubernetes RBAC
  • Rotating 24-hour key-based access to the cluster to minimize malicious activity
  • Darksite Support
    • Local read-only image repository for offline cluster deployments for customers that do not allow internet access

Takeaways

What are the key things you should know about Nutanix Karbon?

  • Any Nutanix AHV customer is a potential target for Karbon, including:
    • Customers that perform internal development
    • Customers who have or plan to adopt CI/CD
    • Customers with Digital Transformation or Application Modernization initiatives
  • The primary benefit of Karbon is reduced CapEx and OpEx for managing and operating Kubernetes environments, flattening the learning curve and enabling DevOps/ITOps teams to quickly support their development teams in deploying containerized workloads.
  • Karbon delivers One-Click operations for Kubernetes provisioning and lifecycle management, enabling enterprises to provide a private-cloud Kubernetes solution with the simplicity and performance of public clouds.
  • Karbon is included in all AOS software editions at no additional cost.
  • Karbon can provide additional functionality to Kubernetes over time through its plugin architecture.
  • Karbon will be a certified Kubernetes distribution and has passed the Kubernetes Conformance Certification.
  • Karbon is listed on the official Kubernetes Solutions and Cloud Native Computing Foundation Landscape pages.

Cleanup

Once your lab completion has been validated, PLEASE do your part to remove any unneeded VMs to ensure resources are available for all users on your shared cluster.

If you DO NOT intend to complete either the Cloud Native or Xi Epoch labs, you should delete the Initials-karbon cluster deployed as part of this exercise. This can be done directly from the Karbon web interface.

If you DO intend to complete either the Cloud Native or Xi Epoch labs, leave your Initials-karbon cluster in place.

Getting Connected

Have a question about Nutanix Karbon? Please reach out to the resources below:

Karbon Product Contacts

Slack Channel: #karbon
Product Manager: Denis Guyadeen, dguyadeen@nutanix.com
Product Marketing Manager: Maryam Sanglaji, maryam.sanglaji@nutanix.com
Technical Marketing Engineer: Dwayne Lessner, dwayne@nutanix.com
NEXT Community Forum: https://next.nutanix.com/kubernetes-containers-30

Additional Kubernetes Training Resources


 

K8s

Checking PVCs

#kubectl get pvc

 

 

Checking and connecting to pods

#kubectl get pods

 

#kubectl exec -it [pod_name] -- /bin/bash

 

 

 

 

 


Changing the Bonding Active Slave

 

AHV# ovs-appctl bond/set-active-slave bond0 eth1

 

Reference: https://www.thegeekdiary.com/redhat-centos-how-to-change-currently-active-slave-interface-of-bonding-online/

 



Helpful Nutanix Commands Cheat Sheet

Source: https://acropolis.ninja/helpful-nutanix-commands-cheat-sheet/

AHV

configure mgt IP address / network

vi /etc/sysconfig/network-scripts/ifcfg-br0

VLAN tag mgt network

ovs-vsctl set port br0 tag=####

Show OVS configuration

ovs-vsctl show

Show configuration of bond0

ovs-appctl bond/show bond0

Show br0 configuration (for example to confirm VLAN tag)

ovs-vsctl list port br0

List VMs on host / find CVM

virsh list --all | grep CVM

Power On powered off CVM

virsh start [name of CVM from above command]

Increase RAM configuration of CVM

virsh setmaxmem [name of CVM from above command] --config --size [ram_gb]GiB

virsh setmem [name of CVM from above command] --config --size [ram_gb]GiB

 

ESXi

Show vSwitch configurations

esxcfg-vswitch -l

Show physical nic list

esxcfg-nics -l

Show vmkernel interfaces configured

esxcfg-vmknic -l

Remove vmnic from vSwitch0

esxcfg-vswitch -U vmnic# vSwitch0

Add vmnic to vSwitch0

esxcfg-vswitch -L vmnic# vSwitch0

Set VLAN for default VM portgroup

esxcfg-vswitch -v [vlan####] -p "VM Network" vSwitch0

Set VLAN for default management portgroup

esxcfg-vswitch -v [vlan id####] -p "Management Network" vSwitch0

Set IP address for default management interface (vmk0)

esxcli network ip interface ipv4 set -i vmk0 -I [ip address] -N [netmask] -t static

Set default gateway

esxcfg-route [gateway ip]

List VMs on host/find CVM

vim-cmd vmsvc/getallvms | grep -i cvm

Power on powered off CVM

vim-cmd vmsvc/power.on [vm id# from above command]

 

CVM

VLAN tag CVM  (only for AHV or ESXi using VGT)

change_cvm_vlan ####

Show AHV host physical uplink configuration

manage_ovs show_uplinks

Remove 1gb pNICs from bond0 on AHV host

manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks

Configure mgt IP address / network

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Create cluster

cluster -s [cvmip1,cvmip2,cvmipN…] create

Get cluster status

cluster status

Get detailed local to current CVM services’ status

genesis status

Restart specific service across entire cluster (example below:  cluster_health)

allssh genesis stop cluster_health; cluster start

Show Prism leader

curl localhost:2019/prism/leader

Stop cluster

cluster stop

Start a stopped cluster

cluster start

Destroy cluster

cluster destroy

Discover nodes

discover_nodes

Gracefully shutdown CVM

cvm_shutdown -P now

Upgrade non-cluster joined node from cluster CVM without expanding the cluster

cluster -u [remote node cvmip] upgrade_node

Check running AOS upgrade status for cluster

upgrade_status

Check running hypervisor upgrade status for cluster

host_upgrade_status

Get CVM AOS version

cat /etc/nutanix/release_version

Get cluster AOS version

ncli cluster version

Create Prism Central instance (should be run on the deployed PC VM, not a cluster CVM)

cluster --cluster_function_list multicluster -s [pcipaddress] create

Run all NCC health checks

ncc health_checks run_all

Export all logs (optionally scrubbed for IP info)

ncc log_collector --anonymize_output=true run_all

 

ipmitool (NX platform)

These commands are hypervisor agnostic; on ESXi, a leading slash is required (/ipmitool).

Configure IPMI to use static ip

ipmitool lan set 1 ipsrc static

Configure IPMI IP address

ipmitool lan set 1 ipaddr [ip address]

Configure IPMI network mask

ipmitool lan set 1 netmask [netmask]

Configure IPMI default gateway

ipmitool lan set 1 defgw ipaddr [gateway ip]

Configure IPMI VLAN tag

ipmitool lan set 1 vlan id [####]

Remove IPMI VLAN tag

ipmitool lan set 1 vlan id off

Show current IPMI configuration

ipmitool lan print 1

Show IPMI mode (failover/dedicated)

ipmitool raw 0x30 0x70 0x0c 0

The result will be one of the following

  1. 00 = Dedicated
  2. 01 = Onboard / Shared
  3. 02 = Failover (default mode)

Get IPMI user list

ipmitool user list

Reset IPMI ADMIN user password back to factory (trailing ADMIN is the password)

ipmitool user set password [# of ADMIN user from command above] ADMIN

Reboot the BMC (reboot the IPMI only)

ipmitool mc reset cold

 

URLs

CVM built-in foundation

http://[cvmip]:8000/gui

Legacy cluster-init (should attempt redirect to foundation on newer AOS)

http://[cvmip]:2100/cluster_init.html

Get cluster status

http://[cvmip]:2100/cluster_status.html


 

Finding the Acropolis Master

 


nutanix@cvm:10.1.1.11:~$ allssh "links -dump http:0:2030 | grep Master"
Executing links -dump http:0:2030 | grep Master on the cluster
================== 10.1.1.11 =================
   Acropolis Master: [5]10.1.1.13:2030
Connection to 10.1.1.11 closed.
================== 10.1.1.12 =================
   Acropolis Master: [5]10.1.1.13:2030
Connection to 10.1.1.12 closed.
================== 10.1.1.13 =================
Connection to 10.1.1.13 closed.

 

 


nutanix@cvm:10.1.1.13:~$ cluster status | grep -v UP
The state of the cluster: start
Lockdown mode: Disabled

CVM: 10.1.1.11 Up, ZeusLeader

CVM: 10.1.1.12 Up

CVM: 10.1.1.13 Up

 

 

 


Data Intensive Computing

Data-intensive computing is a class of parallel computing that uses data parallelism to process large volumes of data, typically on the order of terabytes or petabytes. This massive amount of data is generated every day and is referred to as big data.

Exactly ten years ago, a white paper sponsored by EMC Corporation estimated the amount of information stored in digital form as of 2007 at 281 exabytes. One can only imagine how much larger it is today.

Figures published by IDC show that the amount of data being generated has outpaced the capacity to analyze it. The methods generally used to solve common problems in computational science cannot be applied in this case.

To address this problem, vendors provide tools or tool suites.

Data-intensive computing has several characteristics that distinguish it from other forms of computing:

  • To achieve high performance in data-intensive computing, data movement must be minimized. This reduces system overhead and improves performance by allowing algorithms to execute on the nodes where the data resides.
  • Data-intensive computing systems use a machine-independent approach in which the runtime system controls scheduling, execution, load balancing, communication, and program movement.
  • Data-intensive computing places a strong emphasis on the reliability and availability of data. Traditional large-scale systems are susceptible to hardware failures, communication errors, and software bugs; data-intensive computing is designed to handle these issues.
  • Because data-intensive computing is designed for scalability, it can accommodate any volume of data and therefore meet time-critical requirements. The scalability of the hardware and software architecture is one of the biggest advantages of data-intensive computing.



root@ubuntu:/etc/apt#

root@ubuntu:/etc/apt# cat sources.list

#

deb [trusted=yes] file:/cdrom/dists/stable/main/binary-ppc64el ./




https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/release_notes/sect-subscriptions


http://www.cbs1.com.my/WebLITE/Applications/news/uploaded/docs/Linux%20on%20Power%20Update%20and%20Strategy%20V1.0.pdf


https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/How_Install_RHVM_Red_Hat_oVirt_Virtualization_Manager_on_Power_System_VM_or_LPAR?lang=en


https://www.ibm.com/developerworks/library/l-rhv-environment-trs/



https://www.ibm.com/developerworks/linux/library/l-openpower-firmware-ipmi/index.html




This article describes the capabilities and features of OpenPOWER hardware and reliability, availability, and serviceability (RAS) components. This article also describes different OpenPOWER firmware features that use Intelligent Platform Management Interface (IPMI).

Firmware plays a critical role in the initialization and booting processes in servers. IBM Power Systems servers are developed based on the IBM® POWER® processor architecture and can be categorized into the following types:

  • IBM PowerVM® based systems where the IBM POWER Hypervisor™ firmware is the hypervisor
  • IBM Power® non-virtualization based systems that are running under Open Power Abstraction Layer (OPAL) firmware

Based on the management controller, there are two types of Power non-virtualization systems:

  • Flexible service processor (FSP) based systems, where FSP is an IBM PowerPC® fourth-generation processor that manages the system.
  • Baseboard management controller (BMC) or OpenPOWER based system, where BMC is an ASPEED Technology (AST) 2400/2500 system on chip (SOC) that manages the system.

This article focuses on OpenPOWER BMC-based Power non-virtualization systems that are running in the OPAL mode. The firmware for the OpenPOWER systems is a combination of the BMC firmware component and the host firmware component.

  • BMC firmware stack is used to manage the server mostly for out-of-band or remote system operations such as power control operations, firmware flashing, field-replaceable unit (FRU) inventory information, local area network (LAN) configuration, and sensor readings.
  • OpenPOWER host firmware is a system initialization and boot time firmware for Power Systems servers. It contains the following subcomponents:
    • Self-boot engine
    • Hostboot (system initialization firmware for POWER)
    • Skiboot (OPAL boot and runtime firmware for POWER)
    • Skiroot kernel
    • Petitboot (kexec based bootloader)
    • On Chip Controller (OCC) firmware
    • Op-build (buildroot overlay for OpenPOWER)

The host firmware is the main boot firmware that stores in the processor NOR (PNOR) flash chip, and is used to initialize the host processor and make the system boot until runtime state after which the boot process is taken over by the Linux® operating system. OpenPOWER systems firmware has the following boot sequence after you power on the system:

  1. The hostboot firmware initializes the processor and memory subsystem.
  2. The Skiboot firmware (OPAL) initializes Peripheral Component Interconnect (PCI) and the devices that are outside the chip subsystem.
  3. The hostboot and Skiboot firmware generate a flattened device tree.
    Device tree is a binary tree structure, where all the devices are nodes, and different features of the devices are property value pairs that are exposed to the Linux kernel to configure different device drivers.
  4. After the host firmware (OPAL) completes the booting and generation of the device tree, Skiroot is loaded. Skiroot is a combination of the Linux kernel and boot loader (petitboot). Petitboot is a kexec-based boot loader that can detect and load the boot devices. Skiboot (OPAL) firmware also contains an OPAL application programming interface (API) which are runtime hardware abstraction calls used by the Linux operating system to interact with low-level hardware.

The important OpenPOWER firmware IPMI features that are used to monitor, control, and debug the OpenPOWER system are described here.

  • System power control
  • Sensors
  • Thermal and Power management
  • System event log (SEL)
  • Field-replaceable unit (FRU) data
  • Boot progress codes
  • Serial-over LAN (SOL) console
  • System time and boot management
  • Firmware update procedure
  • Virtual Universal Asynchronous Receiver/Transmitter (VUART) to video graphics array (VGA) rendering
  • IPMI lock and unlock feature
  • IPMI PNOR reprovision feature

For all the IPMI features described in this article, invoke the following out-of-band ipmitool command as the base string when you run the commands for all operations:

$IPMI_CMD = ipmitool -I lanplus -H $BMCIP -U $USER -P $PASSWORD
$BMCIP     BMC IP address
$USER      BMC IPMI Username
$PASSWORD  BMC IPMI Password
$IPMI_CMD  IPMI tool command invocation
$CMD = $IPMI_CMD $Operation

System power control

All system power control operations are controlled by the BMC. The different power control operations are power status, power on, power off, power cycle, and power reset.

  • Power status: To know the power status of the OpenPOWER systems, run the following command:
    $IPMI_CMD chassis power status
  • Power on: The system can be powered on by using one of the following methods:
    • Run the following command to power on the system.
      $IPMI_CMD chassis power on
    • Push the power on button.
    • Wake on local area network (LAN).

    To track the power status during the boot process of an OpenPOWER system, the host status sensor provides a system Advanced Configuration and Power Interface (ACPI) power state. This sensor is set to the following states during booting.

    • Soft off: Before power is on (that is, when the system is in off state)
    • Legacy on: When the hostboot firmware boots (that is, when hostboot firmware is able to communicate with the BMC)
    • Working state: When OPAL starts the Linux kernel.

    More details of sensors are provided in the Sensors section.

  • Power off: The system can be powered off by using one of the following methods:
    • Graceful shutdown: Push the power button for less than 3 seconds or run the IPMI power soft command. BMC sends a System Management Software (SMS) attention (graceful operating system shutdown request) to the host operating system. The BMC waits for the host status sensor to be set to the soft off state before triggering hardware power off.
      $IPMI_CMD chassis power soft

      Before BMC issues hardware power off, it must set the host status sensor to the soft off state.

    • Forced power off: Push the power button for more than 3 seconds or run the following IPMI power off command:
      $IPMI_CMD chassis power off

      On forced power off, the system immediately goes to the standby state irrespective of the operating system status, and BMC sets the host status sensor to the soft off state.

  • Power reset: This operation is a hardware power off followed by the IPMI power on sequence.
    $IPMI_CMD chassis power reset
  • Power cycle: When the system is in the working state, the power cycle operation reboots the system. It is a graceful operating system shutdown followed by an IPMI power reset.
    $IPMI_CMD chassis power cycle
  • Power policy: Based on the system power policy, the BMC restores the system to a particular power state when the power supply is lost. This means a server can be powered on automatically according to the power policy setting. To list the available supported power policies, run the following command:
    $IPMI_CMD chassis power policy list

    To set to a particular power policy, run the following command:

    $IPMI_CMD chassis power policy <always-on/always-off/previous>

Sensors

Sensors are software or hardware physical entities that are used to monitor the health of different system hardware devices or software components. There are two types of sensors.

  • A virtual sensor that does not exist physically. However, it represents the status of some software or hardware operation such as processor/Dual Inline Memory Module (DIMM) functioning or host status.
    $IPMI_CMD sensor list

    or

    $IPMI_CMD sdr elist

    The following example shows the virtual sensor Host Status usage.

    During system run-time, the sensor status shows as S0/G0: WORKING, and when the system is in standby state or powered off state, the sensor status shows as S5/G2: SOFT-OFF.

    HOST STATUS      | 04H | OK  | 35.0 | S0/G0: WORKING
    HOST STATUS      | 04H | OK  | 35.0 | S5/G2: SOFT-OFF
  • A physical sensor on the motherboard that reads the status of hardware devices such as processor/DIMM temperatures and frequencies.

    CPU Temp         | 64h | ok  |  3.0 | 42 degrees C
    Membuf Temp 0    | 65h | ok  |  7.0 | 42 degrees C
    DIMM Temp 0      | 69h | ok  | 32.0 | 28 degrees C
    DIMM Temp 1      | 6Ah | ns  | 32.1 | No Reading

System thermal or power management

This feature is mainly used to monitor and control system power and thermal levels. BMC keeps monitoring certain temperatures (processor, DIMM, and so on) with the help of the OCC and sets the fan speed based on the current readings. You can use the power reading and capping functions to read and maintain the power level of the system. You can run the following commands to deal with platform power limits:

$IPMI_CMD dcmi power
power <command>
reading       Get power related readings from the system
get_limit     Get the configured power limits
set_limit     Set a power limit option
activate      Activate the set power limit
deactivate    Deactivate the set power limit
  • Run the following command to find the platform power readings:

    $IPMI_CMD dcmi power reading
    Instantaneous power reading:                   237 Watts
    Minimum during sampling period:                233 Watts
    Maximum during sampling period:                240 Watts
    Average power reading over sample period:      236 Watts
    IPMI timestamp:                                Mon Nov  7 13:20:22 2016
    Sampling period:                               00000010 Seconds.
    Power reading state is:                        activated
  • Run the following command to find the active power limit. The following output shows that the current power limit is 1000 watts.

    $IPMI_CMD dcmi power get_limit
    Current Limit State: Power Limit Active
    Exception actions:   Log Event to SEL
    Power Limit:         1000 Watts
    Correction time:     1000 milliseconds
    Sampling period:     10 seconds
  • Run the following command to set the active power limit:

    $IPMI_CMD dcmi power set_limit limit 1050
    Current Limit State: Power Limit Active
    Exception actions:   Log Event to SEL
    Power Limit:         1050 Watts
    Correction time:     1000 milliseconds
    Sampling period:     10 seconds
  • Run the following command to activate the power limit:

    $IPMI_CMD  dcmi power activate
    Power limit successfully activated
  • Run the following command to deactivate the power limit:

    $IPMI_CMD dcmi power deactivate
    Power limit successfully deactivated

System event log

The BMC is the master of the system event log repository and maintains all firmware system event logs. Low-level system firmware events that occur during system boot and runtime, as well as critical hardware and firmware failures, are logged in the BMC system event log repository. 64 KB of system event log and extended system event log data can be stored.

$IPMI_CMD SEL LIST
1 | 11/07/2016 | 01:44:49 | TEMPERATURE #0X30 | UPPER CRITICAL GOING HIGH | ASSERTED
2 | 11/07/2016 | 01:45:08 | VOLTAGE #0X60 | LOWER CRITICAL GOING LOW  | ASSERTED
3 | 11/07/2016 | 01:45:25 | MEMORY #0X53 | CORRECTABLE ECC | ASSERTED

Boot progress

System boot progress can be tracked by watching the SOL console, where hostboot and OPAL emit their progress codes, or by monitoring the system firmware progress and operating system boot sensors, which are updated to their respective boot states (processor initialization, motherboard initialization, PCI initialization, and so on).

Hostboot progress codes:

3.42145|Ignoring boot flags, incorrect version 0x0
3.68858|ISTEP  6. 3
4.13466|ISTEP  6. 4
4.13535|ISTEP  6. 5
15.27375|HWAS|PRESENT> DIMM[03]=AAAAAAAAAAAAAAAA
15.27375|HWAS|PRESENT> Membuf[04]=CCCC000000000000
15.27376|HWAS|PRESENT> Proc[05]=C000000000000000
28.57643|ISTEP  6. 6
28.66284|ISTEP  6. 7
28.66337|ISTEP  6. 8
28.69192|ISTEP  6. 9
31.32017|ISTEP  6.10
31.36741|ISTEP  6.11
33.17372|ISTEP  6.12
33.17531|ISTEP  6.13
33.17579|ISTEP  7. 1

OPAL or Skiboot progress codes:

[   44.095518490,5] OPAL skiboot-5.6.0-158-ga1e0a047b2a0 starting...
[   44.095526941,7] initial console log level: memory 7, driver 5
[   44.095530349,6] CPU: P8 generation processor (max 8 threads/core)
[   44.095533485,7] CPU: Boot CPU PIR is 0x0068 PVR is 0x004d0200
[   44.095536954,7] CPU: Initial max PIR set to 0x1fff
[   44.095993908,7] OPAL table: 0x300dc240 .. 0x300dc730, branch table: 0x30002000
[   44.095999625,7] Assigning physical memory map table for unused
[   44.096003854,7] FDT: Parsing fdt @0xff00000
[   44.099952132,6] CHIP: Initialised chip 0 from xscom@3fc0000000000
[   44.100047333,5] CHIP: Chip ID 0000 type: P8 DD2.0
[   44.100050694,7] XSCOM: Base address: 0x3fc0000000000
[   44.100059469,7] XSTOP: XSCOM addr = 0x2010c82, FIR bit = 31
[   44.100063231,6] MFSI 0:0: Initialized
[   44.100065694,6] MFSI 0:2: Initialized
[   44.100068188,6] MFSI 0:1: Initialized
[   44.100451213,5] LPC: LPC[000]: Initialized, access via XSCOM @0xb0020
[   44.100459497,7] LPC: Default bus on chip 0x0
[   44.100585382,6] MEM: parsing reserved memory from node /ibm,hostboot/reserved-memory
[   44.100602925,7] HOMER: Init chip 0

System inventory or vital product data (VPD)

System VPD is stored on the BMC in the IPMI FRU inventory. The BMC collects the FRU data of the hardware that is directly connected to the BMC [for example, backplane, power supply, and voltage regulator module (VRM)]. Hostboot updates the processor, centaur, and DIMM VPD. OPAL updates Peripheral Component Interconnect Express (PCIe) VPD. By running the following command, you can collect the complete system VPD including data that contains manufacturing information, product name, serial, and part numbers.
$IPMI_CMD fru print

The example FRU description for the processor is as follows:

FRU Device Description : CPU (ID 1)
Board Mfg Date         : Mon Jan  1 05:30:00 1996
Board Mfg              : IBM
Board Product          : PROCESSOR MODULE
Board Serial           : YA1932735603
Board Part Number      : 00UM003
Board Extra            : ECID:019A007301180718050A000000C031C2
Board Extra            : EC:20

SOL console

SOL is a mechanism that redirects the input and output of a serial port of the remote system over LAN IP. BMC provides the IPMI SOL console with the help of Ethernet and serial ports that are attached to it. You can see the boot progress and failure messages on the console screen. SOL console is the main interface between the overall system and the user. By running the following command, you can connect to the SOL console: 
$IPMI_CMD sol activate

Also, BMC has 32 KB of circular log buffer that is assigned for SOL console data, which you can get from the BMC busy box.

# cat /extlog/sollog/
/extlog/sollog/SOLHostCapture.log    /extlog/sollog/SOLHostCapture.log.1  /extlog/sollog/archive/
 
# cat /extlog/sollog/SOLHostCapture.log.1
 17.88935|ISTEP 11.12
 17.89009|ISTEP 11.13
 17.89083|ISTEP 12. 1
 17.98714|ISTEP 12. 2
 18.08120|ISTEP 12. 3
 18.11037|ISTEP 12. 4
 18.47909|ISTEP 12. 5
 18.48009|ISTEP 13. 1
 18.56525|ISTEP 13. 2
 18.64082|ISTEP 13. 3
 18.64194|ISTEP 13. 4
 18.67988|ISTEP 13. 5
 18.68156|ISTEP 13. 6
 19.57066|ISTEP 13. 7
 19.74316|ISTEP 13. 8
 19.89404|ISTEP 13. 9

When the system is at a checkstop (for example, when the processor is not able to complete any instructions for some time or it is in an impossible state) the corresponding console log data is archived for further debugging and analysis.

System time management

System time is maintained in the Real-Time Clock (RTC), which is controlled by the BMC. When the BMC boots the system, it sets its own time by reading the current Inter-Integrated Circuit (I2C) RTC time. The host (hostboot or OPAL) or the user can read or write the RTC time by running the following IPMI commands. BMC sets or obtains the current RTC time.

$IPMI_CMD sel time get
03/12/2017 09:32:12
$IPMI_CMD sel time set "03/12/2017 09:34:12"
03/12/2017 09:34:12

Boot failure management

OpenPOWER systems have two sides of the PNOR firmware: Primary side and golden side. BMC always boots the system from the primary side of the PNOR firmware. If the primary side boot fails twice due to system checkstop, watchdog conditions (conditions where the watchdog timer detects malfunctions of the operating system or server and recovers from them), or any other reason, BMC boots the system from the golden side of the PNOR firmware. For this process, BMC uses a boot count sensor, which is initially set to two. During the start of every boot operation, BMC decrements this sensor value by one. When the system reaches to the user-accessible level (that is, petitboot or when Linux boots), OPAL resets this sensor value back to two. Whenever the value of the boot count reaches zero due to boot failure, BMC starts booting the system from the golden side of the PNOR firmware.

During any boot failures, run the following command to know the value of boot count:

$IPMI_CMD sensor list | grep -i "boot count"
Boot Count | 0x0 | discrete | 0x0280| na | na | na | na | na | na

The BIOS golden side sensor has two discrete values (0x0080, 0x0180). The value of this sensor determines the side of the PNOR firmware boot.

The following example shows that the value of this sensor is set to 0x0180, which means that the system boots from the golden side of the PNOR firmware:

$IPMI_CMD sensor list | grep -i golden
BIOS Golden Side | 0x0 | discrete | 0x0180| na | na | na | na | na | na

The following example shows that the value of this sensor is set to 0x0080, which means that the system boots from the primary side of the PNOR firmware:

$IPMI_CMD sensor list | grep -i golden
BIOS Golden Side | 0x0 | discrete  | 0x0080 | na | na | na  | na  | na | na

Out-of-band firmware update

Usually, systems have general availability firmware installed. If the firmware gets corrupted or you want to upgrade the firmware to install bug fixes, you can select the out-of-band firmware update method. System firmware is a combination of the host firmware component (PNOR) and the BMC firmware component. It is a Hardware Platform Management (.hpm) file. To update the firmware code (xxxx.hpm) by using the out-of-band method, you need to complete the following steps:

  1. Power off the system by running the following command: 
    $IPMI_CMD chassis power off
  2. Issue a cold reset to BMC by running the following command:
    $IPMI_CMD mc reset cold

    Then wait until the BMC starts.

  3. Before you update the BMC network settings, back up the network settings by running the following command: 
    $IPMI_CMD raw 0x32 0xba 0x18 0x00
  4. Update the BMC and PNOR levels by the using the HPM upgrade option.
    $IPMI_CMD hpm upgrade <xxxx.hpm file> -z 30000 force

    Wait until the firmware upgrade is successful.

  5. Power on the system by running the following command: 
    $IPMI_CMD chassis power on

VUART-to-VGA rendering

Generally, most of the users want to see boot messages or any boot failures during system boot. To view early boot messages in a system boot, the VUART-to-VGA rendering feature is implemented in OpenPOWER systems. By using this feature, when you power the system on, you can see all the early boot messages in a local VGA console. The BMC renders the output of the host UART to the VGA display.

Figure 1. Sample remote VGA console output

Firmware settings

The boot loader (petitboot) has the following settings in the user interface:

  • Network settings
  • Turbo mode setting (enable/disable)
  • Boot device selection
  • Boot device order

All these features can also be controlled by the BMC. By using IPMI commands, you can enable or disable the turbo mode, select a boot device, and configure the host network.

The settings that are changed by using the IPMI commands take preference, and the settings that are changed at petitboot in nonvolatile random access memory (NVRAM) configuration are overridden. Also, system information provides complete firmware versions (BMC and PNOR versions, including the golden side version), and the BMC MAC address.
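
For example, the boot device can be overridden out of band using the standard ipmitool chassis bootdev syntax (a sketch; the turbo mode and host network settings use OEM-specific commands that are not shown here):

$IPMI_CMD chassis bootdev pxe options=persistent   # make PXE the boot device for subsequent boots
$IPMI_CMD chassis bootparam get 5                  # read back the current boot flags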

IPMI lock and unlock feature

When a system is deployed to a customer, unauthenticated in-band IPMI must not be able to access certain BMC configuration information and functions. The OpenPOWER original equipment manufacturer (OEM) IPMI Lock command can be run from an authenticated interface to lock certain IPMI commands. Once the BMC is in IPMI lockdown mode, unauthenticated in-band IPMI messages are limited to an allowed set of commands (a white list matched by NetFn/command). To revert the BMC to a fully permitted configuration, you can run the OEM IPMI Unlock command from an authenticated interface.

  • Lock IPMI interface command: You can run this command to lock the IPMI interface for safer execution. Only a predefined set of commands is allowed in this mode of operation, similar to a safe mode.

    $IPMI_CMD raw 0x32 0xf3 0x4c 0x4f 0x43 0x4b 0x00; echo $?
     
    0
    # ipmitool chassis status
    Error sending Chassis Status command: Insufficient privilege level
     
    Get SEL time
    # ipmitool raw 0x0a 0x48; echo $?
     00 ae c0 57
    0

    From these examples, you can see that the Chassis Status command does not produce any output because the command is not listed in the white list. However, when you run the Get SEL Time command, output is displayed because the command is available in the white list. Commands that are listed in the white list can work when BMC is in the IPMI lockdown mode.

  • Unlock IPMI Interface command: By unlocking the IPMI interface, you can run all the commands again. 

    $IPMI_CMD raw 0x32 0xF4 0x55 0x4e 0x4c 0x4f 0x43 0x4b 0x00

IPMI PNOR reprovision

You can revert the modified system settings and modified configurations back to default settings. The OEM IPMI PNOR Reprovision command in OpenPOWER resets the system to the default settings. BMC clears any persistent data that is set by the user. Currently, in PNOR, the erasable partitions are GARD (hardware guard entries), NVRAM (boot loader configuration), hostboot attribute overrides (turbo mode non-default value to default), and FIRDATA.

The following procedure shows the PNOR re-provision process for an NVRAM partition:

  1. Update the NVRAM partition with test data by running the following command: 

    # nvram --print-config
    "common" Partition
    ---------------------
     
    # nvram --update-config test-name=test-value
    # nvram --print-config
    "common" Partition
    ---------------------
    test-name=test-value
  2. Run the PNOR reset/reprovision command. 

    # $IPMI_CMD raw 0x3A 0x1C; echo $?
     
    [392812400894,5] IPMI: PNOR access requested
  3. Run the following command to get the reprovisioning status. Wait for reprovision to complete. 00 indicates successful reprovision. 03 indicates that the re-provisioning is still in progress. 

    # $IPMI_CMD raw 0x3A 0x1D; echo $?
    03
    0
    # $IPMI_CMD raw 0x3A 0x1D; echo $?
    03
    0
    [163479624724,5] IPMI: PNOR access released
                     
    # $IPMI_CMD raw 0x3A 0x1D; echo $?
    00
    0
  4. Read NVRAM data after reboot by running the following command:

    # nvram --print-config
    "common" Partition
    ---------------------
    test-name=test-value
    # reboot
                                     
    # nvram  --print-config

NVRAM data is erased after a PNOR reset or reprovision operation.

The IPMI raw command data might change between different OpenPOWER BMC vendors. However, the functionality remains the same. All the examples mentioned in this article mainly refer to American Megatrends BMC vendor-based OpenPOWER systems.



To check host hardware using ipmitool as an alternative to dsa, as root run the following commands:


ipmitool sdr (status of environment, fans, CPUs, etc.)

ipmitool fru (FRUs for memory, system board, etc.)

ipmitool sel elist (errors in the event log)



-----------------------------------------------


ipmitool sel save sellistsave.txt


ipmitool sel elist

ipmitool sel list

ipmitool sel get <entry>

ipmitool sel save <filename>
