Saturday 29 March 2014

It’s Our Blog's 1 Year Anniversary ...!



It has been a year since I started this blog, and I have been thrilled and motivated by the tremendous response you have given, with over 2.6 lakh visits.

I hope we produce more and more informative articles moving forward.

One thing I can say: thank you all! ;)

Our motto is "Stay Hungry, Stay Foolish".

Happy Unixing 

Stay in touch: feeds, Twitter, Facebook & Google+.

Thanks,
Ram





Friday 28 March 2014

A Simple Way to Send Multiple Line Commands Over SSH


Below are three methods to send multiple line commands over SSH. The first method is a quick overview of running remote commands over SSH, the second method uses the bash command to run remote commands over SSH, and the third method uses HERE documents to run remote commands over SSH. Each has its limitations, which I will cover.


Running Remote Commands Over SSH

To run one command on a remote server over SSH:
ssh $HOST ls
To run two commands on a remote server over SSH:
ssh $HOST 'ls; pwd'
To run the third, fourth, fifth, etc. commands on a remote server over SSH keep appending commands with a semicolon inside the single quotes.
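For example, a three-command variant (the commands shown here are arbitrary placeholders, not from the original examples):
ssh $HOST 'ls; pwd; whoami'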

But, what if you want to remotely run many more commands, or if statements, or while loops, etc., and make it all readable?
#!/bin/bash
ssh $HOST '
ls

pwd

if true; then
    echo "This is true"
else
    echo "This is false"
fi

echo "Hello world"
'
The above shell script works but begins to break if local variables are added.

For example, the following shell script will run, but the local variable HELLO will not be expanded inside the remote if statement:
#!/bin/bash

HELLO="world"

ssh $HOST '
ls

pwd

if true; then
    echo $HELLO
else
    echo "This is false"
fi

echo "Hello world"
'
In order to expand the local variable HELLO so it can be used in the remote if statement, read on to the next section.

Using SSH with the BASH Command

As mentioned above, in order to expand the local variable HELLO so it can be used in the remote if statement, the bash command can be used:
#!/bin/bash

HELLO="world"

ssh $HOST bash -c "'
ls

pwd

if true; then
    echo $HELLO
else
    echo "This is false"
fi

echo "Hello world"
'"
Perhaps you want to use a remote sudo command within the shell script:
#!/bin/bash

HELLO="world"

ssh $HOST bash -c "'
ls

pwd

if true; then
    echo $HELLO
else
    echo "This is false"
fi

echo "Hello world"

sudo ls /root
'"
When the above shell script is run, everything will work as intended until the remote sudo command, which will throw the following error:
sudo: sorry, you must have a tty to run sudo
This error is thrown because the remote sudo command is prompting for a password which needs an interactive tty/shell. To force a pseudo interactive tty/shell, add the -t command line switch to the ssh command:
#!/bin/bash

HELLO="world"

ssh -t $HOST bash -c "'
ls

pwd

if true; then
    echo $HELLO
else
    echo "This is false"
fi

echo "Hello world"

sudo ls /root
'"
With a pseudo interactive tty/shell available, the remote sudo command’s password prompt will be displayed, the remote sudo password can then be entered, and the contents of the remote root’s home directory will be displayed.

However, I recently needed to run two specific remote sed commands over SSH: one to find a line and delete it along with the subsequent three lines, and another to find a line and insert a line of text above it. So I naturally tried the bash method mentioned above:
#!/bin/bash

ssh $HOST bash -c "'
cat << EOFTEST1 > /tmp/test1
line one
line two
line three
line four
EOFTEST1

cat << EOFTEST2 > /tmp/test2
line two
EOFTEST2

sed -i -e '/line one/,+3 d' /tmp/test1

sed -i -e '/^line two$/i line one' /tmp/test2
'"
Every time I ran the above shell script, I got the following error:
sed: -e expression #1, char 5: unterminated address regex
However, the same commands work when run by themselves:
ssh $HOST "sed -i -e '/line one/,+3 d' /tmp/test1"
ssh $HOST "sed -i -e '/^line two$/i line one' /tmp/test2"
I thought the problem might be single quotes within single quotes. The bash method above requires everything to be wrapped in single quotes, and a sed command requires the regular expression to be wrapped in single quotes as well. As mentioned in the BASH manual, "a single quote may not occur between single quotes, even when preceded by a backslash".
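For reference, the usual workaround for embedding a single quote inside a single-quoted string is to close the quote, add an escaped quote, and reopen it. A small illustration (not taken from the scripts in this post):
echo 'It'\''s working'    # prints: It's working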

However, I ruled out this single-quote theory as the cause of my problem, because running a simple remote sed search-and-replace command inside of the bash command worked just fine:
#!/bin/bash

ssh $HOST bash -c "'

echo "Hello" >> /tmp/test3

sed -i -e 's/Hello/World/g' /tmp/test3
'"
I can only assume the problem with the specific remote sed commands is something with the syntax that I have not yet figured out.

Despite all this, I eventually figured out that the specific remote sed commands I wanted to run would work when using SSH with HERE documents.

Using SSH with HERE Documents

As mentioned above, the specific remote sed commands I wanted to run did work when using SSH with HERE documents:
ssh $HOST << EOF
cat << EOFTEST1 > /tmp/test1
line one
line two
line three
line four
EOFTEST1

cat << EOFTEST2 > /tmp/test2
line two
EOFTEST2

sed -i -e '/line one/,+3 d' /tmp/test1

sed -i -e '/^line two$/i line one' /tmp/test2
EOF
Despite the remote sed commands working, the following warning message was thrown:
Pseudo-terminal will not be allocated because stdin is not a terminal.
To stop this warning message from appearing, add the -T command line switch to the ssh command to disable pseudo-tty allocation (a pseudo-terminal can never be allocated when using HERE documents, because ssh is reading the commands from standard input):
ssh -T $HOST << EOF
cat << EOFTEST1 > /tmp/test1
line one
line two
line three
line four
EOFTEST1

cat << EOFTEST2 > /tmp/test2
line two
EOFTEST2

sed -i -e '/line one/,+3 d' /tmp/test1

sed -i -e '/^line two$/i line one' /tmp/test2
EOF
With this working, I later discovered that remote sudo commands requiring a password prompt will not work with HERE documents over SSH.
ssh $HOST << EOF
sudo ls /root
EOF
The above ssh command will throw the following error if the SSH user you are logging into requires a password when using the remote sudo command:
Pseudo-terminal will not be allocated because stdin is not a terminal.
user@host's password: 
sudo: no tty present and no askpass program specified
However, the remote sudo command will work if the SSH user’s sudo settings allow that user to use sudo without a password by setting user ALL=(ALL) NOPASSWD:ALL in /etc/sudoers.

References

What’s the Cleanest Way to SSH and Run Multiple Commands in Bash?
Chapter 19. Here Documents

Saturday 22 March 2014

Starting and Stopping Software via RC Directories

 STARTING AND STOPPING SOFTWARE VIA RC DIRECTORIES

Question

How can I start up and stop my software on AIX?

Answer:

Starting and Stopping Software via RC Directories

This document describes how to start and stop software using run level directories via /etc/inittab. A run level is a software configuration that allows only a selected group of processes to exist.

For another method to start or stop an application during a reboot or shutdown, refer to the document, Automated Startup and Shutdown of Custom Software. 

The objective of the run level script feature is to allow customers to start and stop selected applications by changing the run level. The directories are provided for customers to place their own start and stop scripts.



Background:


During system startup, after the root file system has been mounted in the pre-initialization process, the init command is run as the last step of the startup process. The init command attempts to read the /etc/inittab file. If the file exists, init attempts to locate an initdefault entry in /etc/inittab. If an initdefault entry exists, the init command uses the specified run level as the initial system run level. Run level 2 is defined by default to contain all of the terminal processes and daemons that are run in the multiuser environment. This can be seen in the /etc/inittab file:
 # lsitab init:
init:2:initdefault:
NOTE: Booting the system into multiuser mode using a run level other than the default, 2, is not supported by IBM. For a list of valid run levels, see the man pages for init or telinit.

Upon the installation of the feature, the following are added to your system:

  1. Nine directories:
    /etc/rc.d
    /etc/rc.d/rc2.d
    /etc/rc.d/rc3.d
    /etc/rc.d/rc4.d
    /etc/rc.d/rc5.d
    /etc/rc.d/rc6.d
    /etc/rc.d/rc7.d
    /etc/rc.d/rc8.d
    /etc/rc.d/rc9.d
  2. Eight new entries are added to your /etc/inittab
    l2:2:wait:/etc/rc.d/rc 2
    l3:3:wait:/etc/rc.d/rc 3
    l4:4:wait:/etc/rc.d/rc 4
    l5:5:wait:/etc/rc.d/rc 5
    l6:6:wait:/etc/rc.d/rc 6
    l7:7:wait:/etc/rc.d/rc 7
    l8:8:wait:/etc/rc.d/rc 8
    l9:9:wait:/etc/rc.d/rc 9
    The system will automatically run the "K" or kill scripts when entering a given run level, then proceed to run all "S" or start scripts to start up the applications necessary at that level. In this manner, some applications could be stopped while others started when entering a run level.

  3. When shutting down the system or rebooting using the /usr/sbin/shutdown command, all "K" or kill scripts for every run level will be run. This ensures all custom applications are finished before fully shutting down AIX. 

  4. /etc/rc.d/rc script - This script is designed to use the input run level to visit the appropriate /etc/rc.d/rcN.d directory. It first executes scripts in this directory starting with K to stop the applications. Then, it executes scripts starting with S to start the applications.
Sample scripts are provided in the /etc/rc.d/samples directory. See the Commands Reference for information about /etc/inittab, telinit, and init command.

Scripts to stop or start applications


  1. Create a shell script that includes the commands (provided by the application vendor) to stop or start that program. To use the scripts, the user must copy them to the appropriate /etc/rc.d/rcN.d directory. The /etc/rc.d/rc shell script will only visit the directory structures under /etc/rc.d.

  2. NOTE: The script name must start with a K to stop or an S to start the application.

  3. Make the script executable by running the chmod command.
  4. In both cases, it is advisable to use a file naming convention with a number after the K or S, and a short description of the process or service to be killed or started. The controlling RC script will run these in numeric order as it finds them. For example, scripts to stop and start the lpd daemon can be named K70lpd and S70lpd, respectively (see the example after this list).

  5. The run level can be changed by running:
      telinit <run level>

    This tells the init command to place the system in one of the run levels. When the init command requests a change to run levels 0-9, it kills all processes at the current run levels and then restarts any processes associated with the new run levels.

    To check current run level, run who -r. It will return something similar to the following output:
     run level 2 Oct 4 14:23  2 0 S
    In this example, the system is running at the default run level 2.
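As a minimal example of steps 1 through 4, the following commands copy a vendor-provided control script into the run level 2 directory under the K/S naming convention and make the copies executable (the application name "myapp" and the path /opt/myapp/bin/myapp.rc are assumptions for illustration only):

# cp /opt/myapp/bin/myapp.rc /etc/rc.d/rc2.d/S70myapp
# cp /opt/myapp/bin/myapp.rc /etc/rc.d/rc2.d/K70myapp
# chmod +x /etc/rc.d/rc2.d/S70myapp /etc/rc.d/rc2.d/K70myapp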

Friday 21 March 2014

Power Systems Capacity on demand(CoD)

 POWER SYSTEMS CAPACITY ON DEMAND(COD)
Capacity on demand

Capacity on Demand (CoD) offerings allow you to dynamically activate one or more resources on your server as your business peaks dictate. You can activate inactive processors or memory units that are already installed on your server on a temporary or permanent basis.

This topic describes Capacity on Demand capabilities for IBM® POWER6™ and later servers managed by the Hardware Management Console (HMC) Version 7 and later, and explains how to order and use each offering.

Capacity on Demand offerings

We are going to discuss the Capacity on Demand offerings and learn basic information about each one. The following table provides a brief description of each CoD offering. Consult your IBM Business Partner or IBM sales representative to select the CoD offering most appropriate for your environment.

Capacity Upgrade on Demand - You can permanently activate inactive processors and memory units with an activation code, without restarting the server.
Trial Capacity on Demand - You can evaluate the use of inactive processors, memory, or both, at no charge using Trial CoD. After it is started, the trial period is available for 30 power-on days.
On/Off Capacity on Demand - You can activate processors or memory units for a number of days by using the HMC to activate resources on a temporary basis.
Utility Capacity on Demand - Utility CoD is used when you have unpredictable, short workload spikes.
Capacity BackUp - You can use Capacity BackUp to provide an off-site, disaster recovery server using On/Off CoD capabilities.
PowerVM™ Editions (PowerVM) - PowerVM Editions deliver advanced virtualization functions for AIX®, Linux® and IBM i™ clients.

PowerVM Editions include the following offerings:

  • Micro-Partitioning™
  • Virtual I/O Server
  • Integrated Virtualization Manager
  • Live Partition Mobility
  • The ability to run x86 Linux applications on Power Systems

PowerVM Editions (Express, Standard, and Enterprise) offer different capabilities.

Capacity on Demand software licensing considerations:

Typically a tool, such as a license manager, is used to manage the licenses. A license manager detects use of the software, compares it to the entitlement, and then takes action based on the results. A license manager can be provided by IBM or can be made available by the software provider.

Table 2. Capacity on Demand software licensing considerations

*Note: It is possible that you use a combination of these licensing types.

Capacity Upgrade on Demand activation codes:

After you decide to permanently activate some or all of your resources, you must order and purchase one or more activation features. When you order and purchase activation features, you are provided with one or more activation codes that you use to activate resources on your server.

The activation codes are posted on an IBM Web site for quick access; they usually appear within 24 hours of purchase. Here is the link: http://www-912.ibm.com/pod/pod.

Ordering Capacity Upgrade on Demand activation features:

You can order activation features for a new server, a server model upgrade, or an installed server. After you place your order, you will receive a code that activates inactive processors or memory units.

Notes:
  •  It can take several days to process an order. You can use a one-time no-charge Trial Capacity on Demand for 30 days to satisfy workload requirements while your order for permanent activation of additional capacity is being fulfilled. 
  •  An order for activation features will process more quickly if you do not include any miscellaneous features with the order.
To order one or more CUoD activation features:

1. Determine the number of inactive processors or memory units that you want to activate.
2. Contact your IBM Business Partner or IBM sales representative to place your order for one or more activation features.

After ordering, see “Activating Capacity Upgrade on Demand” to activate inactive resources permanently.

Using Capacity Upgrade on Demand from ASMI

You can use the Hardware Management Console (HMC) or the Advanced System Management Interface (ASMI) to manage Capacity Upgrade on Demand (CUoD).

Most Capacity on Demand (CoD) tasks on the HMC require the HMC Super Administrator user role.
If you are not using the HMC, you can use the ASMI.
Activating Capacity Upgrade on Demand
When you purchase one or more activation features, you will receive corresponding activation codes to permanently activate your inactive processors or memory units.

To permanently activate your inactive resources by retrieving and entering your activation code:

1. Retrieve the activation code by going to http://www-912.ibm.com/pod/pod .
2. Enter the system type and serial number of your server.
3. Record the activation code that is displayed on the Web site.
4. Enter your activation code on your server using the HMC. To enter your code:
      a. In the navigation area of the HMC window, expand Systems Management.
      b. Select Servers.
      c. In the contents area, select the server on which you want to enter your activation code.
      d. Select Tasks > Capacity on Demand (CoD) > Enter CoD Code.
      e. Type your activation code in the Code field.
      f. Click OK.

Any newly activated processors are now available for use by uncapped logical partitions. If there are no uncapped logical partitions, you must assign the processors to one or more logical partitions in order to begin using the processors. Any newly activated memory must be assigned to one or more logical partitions to begin using the newly activated memory.
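If you prefer the HMC command line to the GUI, the following is a hypothetical sketch of assigning one newly activated processor to a partition with a dynamic LPAR operation; the chhwres command and the flags shown are assumptions based on common HMC releases, so verify them against your HMC's man pages before use:

chhwres -m <managed-system> -r proc -o a -p <partition-name> --procs 1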
Viewing settings for Capacity on Demand resources
You can use the Hardware Management Console (HMC) to view Capacity on Demand (CoD) settings. With these settings you can see how many processors or memory units you have, how many are active, and how many are available for activation using CoD. You can also view information about your On/Off CoD processors and memory units, Trial CoD processors and memory units, and your Utility CoD processors.
To view the capacity settings for processors or memory, do the following:
  1. In the navigation area of the HMC window, expand Systems Management.
  2. Select Servers.
  3. In the contents area, select the server on which you want to view capacity settings.
  4. Select Capacity on Demand.
  5. Select either Processor or Memory.
  6. Select the CoD offering that you want to view.
  7. Select View Capacity Settings
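The same capacity information can also be pulled from the HMC command line. The following is a hypothetical sketch using the lscod command (the flags shown are assumptions; confirm them against the man pages of your HMC release):

lscod -m <managed-system> -t cap -r proc -c cuod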
Viewing and saving Capacity on Demand code-generation information:
   1. In the navigation area of the HMC window, expand Systems Management.
   2. Select Servers.
   3. In the contents area, select the server on which you want to view and save the CoD code information.
   4. Select Tasks.
   5. Select Capacity on Demand (CoD).
   6. Select Processors (or Memory).
   7. Select the CoD offering you want to view or save.
   8. Select View Code Information.
   9. In the CoD Code Information window, click Save to save the CoD code information to a file on a remote system or to a file on removable media.
   10. In the Save CoD Code Information panel, select one of these options, and then perform the tasks associated with that option.

Trial Capacity on Demand:

Trial Capacity on Demand (CoD) provides no-charge temporary capacity to enable you to test new function on your server.

 Activating Trial Capacity on Demand:

You can activate your inactive processors or memory for a trial period by obtaining and entering a trial processor code or a trial memory code.
To activate Trial CoD, do the following:
1. Retrieve the activation code by going to the following Web address: http://www-912.ibm.com/pod/pod
2. Enter your activation code on your server using the HMC. To enter your code:
   a. In the navigation area of the HMC window, expand Systems Management.
   b. Select Servers.
   c. In the contents area, select the server on which you want to enter your activation code.
   d. Select Tasks > Capacity on Demand (CoD) > Enter CoD Code.
   e. Type your activation code in the Code field.
   f. Click OK. 

Stopping Trial Capacity on Demand

Trial Capacity on Demand ends when the trial period is over and the resources have been reclaimed by the server. You must return the resources before the trial period has ended.
 
To stop a current Trial Capacity on Demand, follow these steps:

1. Return the trial resources. 
2. In the navigation area of the HMC window, expand Systems Management.
3. Select Servers.
4. In the contents area, select the server on which you want to stop Trial Capacity on Demand.
5. Select Tasks.
6. Select Capacity on Demand (CoD).
7. Select either Processor or Memory.
8. Select Trial CoD.
9. Select Stop.
10. In the confirmation window, click Yes to stop the trial. Click No to cancel the request to stop the trial (the trial will remain active).

   Trial Capacity on Demand is now stopped and cannot be restarted.

On/Off Capacity on Demand:

On/Off Capacity on Demand (CoD) allows you to temporarily activate and deactivate processor cores and memory units to help meet the demands of business peaks. After you request that a number of processor cores or memory units are to be made temporarily available for a specified number of days, those processor cores and memory units are available immediately. You can start and stop requests for On/Off CoD, and you are billed for usage at the end of each quarter.

Using On/Off Capacity on Demand

You must use the Hardware Management Console (HMC) to use and manage On/Off Capacity on Demand (CoD). Most Capacity on Demand (CoD) tasks on the HMC require the HMC Super Administrator user role. After you have enabled and activated On/Off CoD, minimal day-to-day management of your temporary capacity is required.

Enabling On/Off Capacity on Demand

Before requesting temporary capacity on your server, you must enable your server for On/Off Capacity on Demand (CoD). To enable your server for On/Off CoD, you must use the Hardware Management Console (HMC).

Most CoD tasks on the HMC require the HMC super administrator user role.
To enable your server for On/Off CoD:

1. Retrieve the On/Off CoD enablement code by going to http://www-912.ibm.com/pod/pod .
2. Enter your activation code on your server using the HMC. To enter your code:
a. In the navigation area of the HMC window, expand Systems Management.
b. Select Servers.
c. In the contents area, select the server on which you want to enter your activation code.
d. Select Tasks > Capacity on Demand (CoD) > Enter CoD Code.
e. Type your activation code in the Code field.
f. Click OK.

Your server is now enabled for On/Off CoD.

Activating On/Off Capacity on Demand

After you have ordered On/Off CoD and enabled On/Off CoD, you can request temporary activation of On/Off CoD resources.

To request activation of On/Off CoD resources:

1. In the navigation area of the Hardware Management Console window, expand Systems Management.
2. Select Servers.
3. In the contents area, select the server on which you want to activate processors or memory temporarily.
4. Select Tasks.
5. Select Capacity on Demand.
6. Select either Processor or Memory.
7. Select On/Off CoD.
8. Select Manage.
9. Type the number of On/Off CoD resources you want and the number of days you want them for, and then click OK.

Any newly activated processors are now available for use by uncapped logical partitions. If there are no uncapped logical partitions, you must assign the processors to one or more logical partitions in order to begin using the processors. Any newly activated memory must be assigned to one or more logical partitions to begin using the newly activated memory.

 Utility Capacity on Demand

Utility Capacity on Demand automatically delivers additional processor capacity on a temporary basis within the system’s default Shared Processor Pool.

Utility Capacity on Demand concepts

The Utility Capacity on Demand (CoD) offering is for customers with unpredictable, short workload increases who need an automated and affordable way to help assure that adequate server resource is
available as needed.
When you add utility CoD processors, they are automatically placed in the default shared processor pool. These processors are available to any uncapped partition in any shared processor pool.
Entering Utility CoD enablement codes and reporting codes

Learn more about how to use the HMC to enter enablement and reporting codes.
To enter the Utility CoD enablement and reporting codes, do the following:
1. In the navigation area of the HMC window, expand Systems Management.
2. Select Servers.
3. In the contents area, select the server on which you want to enter your Utility CoD enablement or reporting code.
4. Select Tasks > Capacity on Demand > Enter CoD Code.
5. Type your enablement or reporting code in the Code field.
6. Click OK

Capacity BackUp:

Capacity BackUp uses On/Off Capacity on Demand (CoD) capabilities to provide an off-site, disaster-recovery server.
The Capacity BackUp offering has a minimum set of active processor cores that can be used for any workload and a large number of inactive processor cores that can be activated using On/Off CoD in the event of a disaster. A specified number of no-charge On/Off CoD processor days is provided with Capacity BackUp.

Make sure that you have prepared your server before continuing. 

You can test your Capacity BackUp activations while not incurring duplicate charges. 

PowerVM Editions (PowerVM)

PowerVM Editions (also referred to as PowerVM) is activated with a code, similar to the way that capacity is activated on IBM Systems and IBM eServer hardware.

When you purchase a PowerVM Editions feature, a code is provided that can be entered on the Hardware Management Console (HMC) to activate the technology. You can enter PowerVM activation codes by using the Integrated Virtualization Manager (IVM).

PowerVM Editions concepts

This information describes the virtualization technologies that are available.

The following virtualization technologies are available:
PowerVM is a Virtualization Engine technology that enables the system for the following features:
– Micro-Partitioning
– Virtual I/O Server
– Integrated Virtualization Manager
– Live Partition Mobility
– The ability to run x86 Linux applications on Power Systems



Sunday 16 March 2014

How to Install Wine 1.6.2 in RHEL/Fedora/CentOS/Ubuntu/Linux Mint ?

 HOW TO INSTALL WINE 1.6.2 IN RHEL/FEDORA/CENTOS/UBUNTU/LINUX MINT ?
"Wine" is a free  open source application  for Linux operating system which will  enable running Windows applications on several POSIX-compliant operating systems, such as Linux, Mac OSX, & BSD.

Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop.
Benefits:
  • It uses all the strengths of Unix operating systems, such as stability, flexibility and remote administration.
  • Wine allows you to call Windows applications from shell scripts, so as to make the best use of Unix systems.
  • Wine makes it possible to access Windows applications remotely.
  • Wine makes it economical to use thin clients
  • Wine is Open Source Software, you can customise it as per your requirements
Wine comes with a software library, Winelib, that lets developers compile Windows applications on Linux/Unix platforms. Wine has now reached its latest stable version, Wine 1.6.2.

Going forward, we will walk you through the procedure to install Wine.

Download S/W: http://www.winehq.org/announce/1.6.2

Installing Wine 1.6.2 in RHEL, CentOS and Fedora:
1) Installing Dependency Packages
# yum -y groupinstall 'Development Tools'
# yum -y install libX11-devel freetype-devel

2) Downloading Wine 1.6.2
# wget http://prdownloads.sourceforge.net/wine/wine-1.6.2.tar.bz2

3) Extracting and installing Wine 1.6.2
$ tar -xvjf wine-1.6.2.tar.bz2

On 32-bit systems:

# cd wine-1.6.2/
# ./tools/wineinstall

On 64-bit systems:

$ cd wine-1.6.2/
$ ./configure --enable-win64
$ make
# make install

4) Run wine
$ wine notepad++.exe
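To confirm that the build is installed correctly, you can check the reported version and, if desired, open the Wine configuration tool; both commands ship with Wine:
$ wine --version
$ winecfg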

Install Wine 1.6.2 in Ubuntu/Linux Mint
1) Add the PPA
# add-apt-repository ppa:ubuntu-wine/ppa

2) Update the local repository.
# apt-get update

3) Install the package.
# apt-get install wine1.6

Sunday 9 March 2014

Troubleshooting GPFS Issues

In this article, we are going to discuss the most common methods of troubleshooting GPFS issues.
 TROUBLESHOOTING GPFS ISSUES

What to do when you hit a GPFS issue

Got a problem? Don't panic!
Check for possible basic problems (see the sketch after this list):
  • Is the network OK?
  • Check the status of the cluster: "mmgetstate -a"
  • Check the status of the NSDs: "mmlsdisk fsname"
Take a 5 minute break.
  • In most cases GPFS will recover by itself, without any intervention from the administrator.
If it has not recovered:
  • Ensure that you are the only person doing the work!
  • Check the GPFS logs (first on the cluster manager, then on the FS manager, then on the NSD servers).
  • Check the syslog (/var/log/messages) for any errors.
  • Check disk availability (mmlsdisk fsname).
  • Consult the "Problem Determination Guide".
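As referenced above, a minimal health-check sketch that combines the basic checks; here "fsname" stands for the name of your GPFS file system:

mmgetstate -a
mmlsdisk fsname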

Some useful commands:

  • "mmfsadm dump waiters" will help to find long-lasting processes.
  • "mmdiag --network | grep pending" helps to identify a non-responsive node.
  • "mmdiag --iohist" lists the last 512 I/O operations performed by GPFS on the current node (helps to find a malfunctioning disk).
  • "gpfs.snap" will gather all logs and configurations from all nodes in the cluster; it is the first thing to send to IBM support when opening a service request.

GPFS V3.4 Problem Determination Guide:

NFS stale file handle:

When a GPFS mount point is in the "NFS stale file handle" status, for example:
[root@um-gpfs1 root]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/gpfs_um1 8125032448 8023801088 101231360 99% /storage/gpfs_um
df: `/storage/gpfs_um': Stale NFS file handle
Then check if there is any NSD with status "down" 
[root@um-gpfs1 root]# mmlsdisk gpfs_um
disk         driver   sector failure holds    holds
name         type     size   group   metadata data  status        availability
------------ -------- ------ ------- -------- ----- ------------- ------------
disk21       nsd      512    4015    yes      yes   ready         up
disk22       nsd      512    4015    yes      yes   ready         down
disk23       nsd      512    4015    yes      yes   ready         down
disk24       nsd      512    4013    yes      yes   ready         up
Restart the NSDs (important: do it for all NSDs with status "down" in one command):
[root@um-gpfs1 root]# mmchdisk gpfs_um start -d "disk22;disk23"
Then re-mount the file systems.

Recovery of GPFS configuration:

If a node of the cluster has lost its configuration (has been re-installed) but is still present as a member of the cluster ("mmgetstate" lists it in the "unknown" state), use this command to recover the node:
/usr/lpp/mmfs/bin/mmsdrrestore -p diskserv-san-5 -R /usr/bin/scp

Checking existing NSD:

  • If you get the warning "Disk descriptor xxx refers to an existing NSD" while creating a new NSD, use this command to verify whether the device is actually used in one of the file systems:
mmfsadm test readdescraw /dev/emcpowerax

Saturday 8 March 2014

A Quick Guide To Unix Shell Scripting

1) What is a shell script?

Normally, shells are interactive: the shell accepts commands from you (via the keyboard) and executes them. But if you run the same sequence of commands again and again, you can store the sequence of commands in a text file and tell the shell to execute the text file instead of entering the commands one by one. This is known as a shell script.
A shell script can be defined as:

"A shell script is a series of commands written in a plain text file. A shell script is just like a batch file in MS-DOS, but has more power than an MS-DOS batch file."

2)  Shebang:

Naturally, a shell script should start with a line such as the following:
#!/bin/bash
This indicates that the script should be run in the bash shell regardless of which interactive shell the user has chosen. This is very important, since the syntax of different shells can vary greatly.

3) How to write shell script ?

Now I will walk you through how to write and execute shell scripts. We will get started by writing a small shell script that prints "Hello UnixMantra" on the screen.
The following steps are required to write a shell script:
(1) Use any text editor, such as vi, to write the shell script.

(2) After writing shell script set execute permission for your script as follows

syntax: chmod permission your-script-name

Examples:
$ chmod +x your-script-name
$ chmod 755 your-script-name

Note: This sets read, write, and execute (7) permission for the owner; for group and others, the permission is read and execute only (5).

(3) Execute your script as

syntax: bash your-script-name
sh your-script-name
./your-script-name

Examples:
$ bash bar
$ sh bar
$ ./bar

NOTE: In the last syntax, ./ means the current directory. But . (dot) on its own means execute the given command file in the current shell without starting a new copy of the shell. The syntax for the . (dot) command is as follows:

Syntax: . command-name

Example:
$ . foo

Now you are ready to write your first shell script, which will print "Hello UnixMantra" on the screen. See the common vi command list if you are new to vi.

$ vi firstscript
#
# My first shell script
#
clear
echo "Hello UnixMantra"

After saving the above script, you can run the script as follows:
$ ./firstscript

This will not run the script, since we have not yet set execute permission for it; to do so, type:

$ chmod 755 firstscript
$ ./firstscript

4) Commenting Commands:

Any line beginning with a hash '#' character in the first column is taken to be a comment and is ignored. The only exception is the first line (shebang #!/bin/sh)  in the file, where the comment is used to indicate which shell should be used.

5) Shell Variables:

Like every programming language, shells support variables. Shell variables may be assigned values, manipulated, and used. Some variables are automatically assigned for use by the shell.
There are two types of variables:

(1) System variables - created and maintained by the Unix OS itself. These variables are defined in CAPITAL LETTERS.
(2) User defined variables (UDV) - created and maintained by the user. These variables are defined in lowercase letters.
Any programming language needs variables. You define a variable as follows:
Y="hello"
and refer to it as follows:
$Y
More specifically, $Y is used to denote the value of the variable Y.

$ no=10
# this is ok
$ 10=no
# Error, NOT OK: a variable name cannot begin with a digit; the value must be on the right side of the = sign.

To define a variable called 'vech' having the value car:
$ vech=car

To define a variable called n having the value 10:
$ n=10
Caution: Do not modify system variables; this can sometimes create problems.
You can print the value of a variable or command using "echo" or "print":
#echo "$Y"
hello
I always suggest using curly braces {} to protect variable names; this gives a clear advantage when grabbing the actual values of variables.
Eg:
# X=Hello
#echo "$XWorld"
There won't be any output from the above command, because the shell looks for a variable named "XWorld" rather than X. We can avoid this embarrassing situation using curly braces.
#echo "${X}World"
HelloWorld

6)  Analysing  quotes:

There are three types of quotes

"  (double quotes) - Anything enclosed in double quotes loses the special meaning of its characters (except \ and $).
'  (single quotes) - Anything enclosed in single quotes remains unchanged.
`  (back quote)    - Used to execute a command.
Eg:
MY_VALUE=Hello
$ echo '$MY_VALUE'
$MY_VALUE
$ echo "$MY_VALUE"
Hello
$ echo "Today is date"
Can't print message with today's date.
$ echo "Today is `date`".
It will print today's date as, Today is  Fri Mar 07 15:35:08 EDT 2014

7)  Conditional Statement:

if or elif
Conditionals are used where an action is appropriate only under certain circumstances. The most frequently used conditional operator is the if-statement. For example, the script below displays the contents of a file on the screen using cat, but lists the contents of a directory using ls.
#!/bin/sh
# show script
if [ -d $1 ]
then
  ls $1
else
  cat $1
fi
Here, we notice a number of points:

  • The if-statement begins with the keyword if, and ends with the keyword fi (if, reversed).
  • The if keyword is followed by a condition, which is enclosed in square brackets. In this case, the condition -d $1 may be read as: if $1 is a directory.
  • The line after the if keyword contains the keyword then.
  • Optionally, you may include an else keyword.
If the condition is satisfied (in this case, if $1 is a directory) then the commands between the then and else keywords are executed; if the condition isn't satisfied then the commands between the else and fi keywords are executed. If an else keyword isn't included, then the commands between the then and fi keywords are executed if the condition is true; otherwise the whole section is skipped.

Type 1 - to run a simple test:

if condition
then
    statement1
    statement2
    ..........
fi

Type 2 - if you wish to specify an alternate action when the condition fails:

if condition
then
    statement1
    statement2
    ..........
else
    statement3
fi

Type 3 - it is possible to test for another condition if the first "if" fails; note that any number of elifs can be added:

if condition1
then
    statement1
    statement2
    ..........
elif condition2
then
    statement3
    statement4
    ........
elif condition3
then
    statement5
    statement6
    ........
fi
The Test Command and Operators
The command used in conditionals nearly all the time is the test command. Test returns true or false (more accurately, exits with 0 or non zero status) depending respectively on whether the test is passed or failed. It works like this:
test operand1 operator operand2
For some tests, only one operand (operand2) is needed. The test command is typically abbreviated in this form:
[ operand1 operator operand2 ]
To bring this discussion back down to earth, we give a few examples:
#!/bin/bash
X=3
Y=4
empty_string=""
if [ $X -lt $Y ] # is $X less than $Y ? 
then
 echo "\$X=${X}, which is smaller than \$Y=${Y}"
fi

if [ -n "$empty_string" ]; then
 echo "empty string is non_empty"
fi

if [ -e "${HOME}/.surya" ]; then    # test to see if ~/.surya exists
 echo "you have a .surya file"
 if [ -L "${HOME}/.surya" ]; then   # is it a symlink ?  
  echo "it's a symbolic link
 elif [ -f "${HOME}/.surya" ]; then  # is it a regular file ?
  echo "it's a regular file"
 fi
else
 echo "you have no .surya file"
fi
A brief summary of test operators
Here's a quick list of test operators. It's by no means comprehensive, but it's likely to be all you'll need to remember (if you need anything else, you can always check the bash manpage ... )

operator   produces true if...                                                                number of operands
-n         operand has non-zero length                                                        1
-z         operand has zero length                                                            1
-d         there exists a directory whose name is operand                                     1
-f         there exists a file whose name is operand                                          1
-eq        the operands are integers and they are equal                                       2
-ne        the opposite of -eq                                                                2
=          the operands are equal (as strings)                                                2
!=         opposite of =                                                                      2
-lt        operand1 is strictly less than operand2 (both operands should be integers)         2
-gt        operand1 is strictly greater than operand2 (both operands should be integers)      2
-ge        operand1 is greater than or equal to operand2 (both operands should be integers)   2
-le        operand1 is less than or equal to operand2 (both operands should be integers)      2
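A short sketch exercising two of the operators above (the variable names are illustrative only, not from the original examples):

#!/bin/bash
NAME="unixmantra"
EMPTY=""
if [ "$NAME" = "unixmantra" ]; then   # string equality test
 echo "names match"
fi
if [ -z "$EMPTY" ]; then              # zero-length test
 echo "EMPTY has zero length"
fi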
Case Statements:
The case construct has the following syntax:
case word in
pattern) list ;;
...
esac
An example of this should make things clearer:
#!/bin/sh
case $1
in
1) echo 'First Choice';;
2) echo 'Second Choice';;
*) echo 'Other Choice';;
esac
"1", "2" and "*" are patterns, word is compared to each pattern and if a match is found the body of the corresponding pattern is executed, we have used "*" to represent everything, since this is checked last we will still catch "1" and "2" because they are checked first. In our example word is "$1", the first parameter, hence if the script is ran with the argument "1" it will output "First Choice", "2" "Second Choice" and anything else "Other Choice". In this example we compared against numbers (essentially still a string comparison however) but the pattern can be more complex, see the SH man page for more information.

8) Looping Commands:

Whereas conditional statements allow programs to make choices about what to do, looping commands support repetition. Many scripts are written precisely because some repetitious processing of many files is required, so looping commands are extremely important.

Loops are constructions that enable one to reiterate a procedure or perform the same procedure on several different items. There are the following kinds of loops available in bash

  • for loops
  • while loops
'For' loops
The syntax for the for loops is best demonstrated by example.
#!/bin/bash
for X in red green blue
do
 echo $X
done
The for loop iterates over the space-separated items. Note that if some of the items have embedded spaces, you need to protect them with quotes. Here's an example:
#!/bin/bash
colour1="red"
colour2="light blue"
colour3="dark green"
for X in "$colour1" $colour2" $colour3"
do
 echo $X
done
Can you guess what would happen if we left out the quotes in the for statement ? This indicates that variable names should be protected with quotes unless you are pretty sure that they do not contain any spaces.
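Here is a small illustration of the difference, using the colour2 value from above (the expected output is shown in the comments):

#!/bin/bash
colour2="light blue"
for X in $colour2        # unquoted: the value is split on the space
do
 echo $X                 # prints "light", then "blue"
done
for X in "$colour2"      # quoted: treated as a single item
do
 echo $X                 # prints "light blue"
done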
'While' Loops
While loops iterate "while" a given condition is true. An example of this:
#!/bin/bash
X=0
while [ $X -le 20 ]
do
 echo $X
 X=$((X+1))
done
This raises a natural question: why not use C-like for loops, such as
for ((X=1; X<10; X++))
Bash does support this arithmetic form (plain sh does not), but heavy iteration is discouraged in any case: bash is an interpreted language, and a rather slow one at that.

9) Functions:

When a program gets complex, we need to use the divide and conquer technique. It means that whenever a program gets complicated, we divide it into small chunks/entities, which are known as functions.
A function is a series of instructions/commands. A function performs a particular activity in the shell, i.e. it has specific work to do, or simply put, a task. To define a function, use the following syntax:
Syntax:
           function-name ( )
           {
                command1
                command2
                .....
                ...
                commandN
                return
           }
Where function-name is the name of your function, which executes the series of commands. A return statement will terminate the function. Example:
Type SayHello() at the $ prompt as follows:
$ SayHello() {
 echo "Hello $LOGNAME, Have nice computing"
 return
}
To execute this SayHello() function, just type its name as follows:
$ SayHello
Hello surya, Have nice computing.
This is how you can call a function.

10)  Command Substitution:

Command Substitution is a very handy feature of the bash shell. It enables you to take the output of a command and treat it as though it was written on the command line. For example, if you want to set the variable X to the output of a command, the way you do this is via command substitution.

There are two forms of command substitution: the $( ) form and backtick expansion.
The $( ) form works as follows:
$(commands) expands to the output of commands

This permits nesting, so commands can include further $( ) substitutions.

Backtick expansion expands
`commands` to the output of commands
An example is given:
#!/bin/bash
files="$(ls)"
web_files=`ls public_html`
echo "$files"      # we need the quotes to preserve embedded newlines in $files
echo "$web_files"  # we need the quotes to preserve newlines 
X=`expr 3 \* 2 + 4` # expr evaluates arithmetic expressions. man expr for details.
echo "$X"
The advantage of the $() substitution method is almost self-evident: it is very easy to nest. It is supported by most of the Bourne shell variants (the POSIX shell or better is OK). However, the backtick substitution is slightly more readable, and is supported by even the most basic shells (any #!/bin/sh version is just fine).

Note that if strings are not quote-protected in the above echo statement, new lines are replaced by spaces in the output.

11)  Shell Arithmetic:

Used to perform arithmetic operations.
Syntax:
expr op1 math-operator op2
Examples:
$ expr 1 + 3
$ expr 2 - 1
$ expr 10 / 2
$ expr 20 % 3
$ expr 10 \* 3
$ echo `expr 6 + 3`
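As a side note, bash also provides built-in arithmetic with $(( )), the same construct used in the while loop example earlier, which avoids starting the external expr command each time:
$ X=$((6 + 3))
$ echo $X
9
$ echo $((10 % 3))
1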

12)  How to de-bug the shell script?

While programming in the shell, you sometimes need to find the errors (bugs) in a shell script and correct them (debug). For this purpose you can use the -v and -x options with the sh or bash command to debug the shell script.
General syntax is as follows:
sh   option   { shell-script-name }
OR
bash   option   { shell-script-name }

Option can be

-v Print shell input lines as they are read.
-x After expanding each simple command, bash displays the expanded value of the PS4 system variable, followed by the command and its expanded arguments.
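For example, to trace the firstscript example from section 3 (assuming it is in the current directory):
$ bash -x ./firstscript
$ bash -v ./firstscript
The -x run prefixes each command with the value of PS4 (+ by default) as it executes, while -v simply echoes each line of the script as it is read.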

Thursday 6 March 2014

How to Collect a GPFS snap

Question:

How do I collect a gpfs snap for review by  IBM GPFS support?

Answer

The command to collect a GPFS snap is "gpfs.snap"; it is very generic but very helpful.

# gpfs.snap

Note: The purpose of the above command is to collect the snap on one node in the cluster, preferably the problematic one.

If the cluster is experiencing problems as a whole, use the -a flag:

# gpfs.snap -a

Note 1: The -a flag is only recommended for smaller clusters, as it collects the logs from all of the nodes in the cluster.

Note 2: The -d option can specify the output directory; the default is /tmp/gpfs.snapOut.
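For example, to write the snap output to a directory of your choice (the path shown here is just an illustration):
# gpfs.snap -d /tmp/mysnap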

==> Another recommendation is to collect mmfsadm dumps:

/usr/lpp/mmfs/bin/mmfsadm dump waiters > gpfs.waiters
/usr/lpp/mmfs/bin/mmfsadm dump all > gpfs.dump.all
/usr/lpp/mmfs/bin/mmfsadm dump kthreads > gpfs.dump.kthreads

==> Another file you need to check is the log file /var/adm/ras/mmfs.log.latest.

This information will greatly assist development in reviewing performance or problematic issues in GPFS.

Saturday 1 March 2014

Find NFS Clients Connected to NFS Server

 FIND NFS CLIENTS CONNECTED TO NFS SERVER

Question: How to find NFS clients connected to an NFS server?

Ans:
The simplest way is to use:

showmount -e  <NFS Server Name>

# showmount -e umbox04
export list for umbox04:
/gpfs/edw/common  umlpar1 umlpar2 umlpar3 umlpar4 umlpar5
/export/nim      udaixserv1,udaixserv2,udaixserv3,udaixserv4,udaixserv5
/bigdata (everyone)

But there is a problem here: if an NFS export is shared to everyone (say /bigdata in the above example), you will not be able to tell which clients are using it.

To overcome this, there are two ways to find the NFS clients connected to an NFS server.

1. Using "netstat":

The "netstat" approach is an indirect method: we use the NFS port, which is 2049, to get the client information.
netstat -an | grep nfs.server.ip:port
If your NFS server IP address is 10.6.55.21 and the port is 2049, enter:
Sample outputs:
# netstat -an | grep 10.6.55.21:2049
tcp        0      0 10.6.55.21:2049       10.6.55.33:757         ESTABLISHED
tcp        0      0 10.6.55.21:2049       10.6.55.34:892         ESTABLISHED

Where,

  •     10.6.55.21 - NFS server IP address
  •     2049 - NFS server port
  •     10.6.55.33 and 10.6.55.34 - NFS clients IP address
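Building on the netstat output above, a rough sketch to count established NFS connections per client address (the awk column number assumes the netstat output layout shown in the sample above):
# netstat -an | grep ':2049' | grep ESTABLISHED | awk '{print $5}' | sort | uniq -c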

2. Using showmount command

You can use the showmount command to see mount information for an NFS server. Be aware that this command may produce unreliable results (you can type it on any one of the NFS clients):
showmount -a <NFS Server Name>
Sample outputs:
All mount points on mynfsserv01:

#showmount -a mynfsserv01
10.6.55.33:/umdata
10.6.55.34:/umdata
10.6.55.69:/umdata
10.6.55.3:/umdata
10.6.55.6:/umdata
10.6.55.16:/umdata

Where,
  • -a: List both the client hostname or IP address and mounted directory in host:dir format. This info should not be considered reliable.

As per the rpc.mountd(8) man page:
The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receiving a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export.

Clients can discover the list of file systems an NFS server is currently exporting, or the list of other clients that have mounted its exports, by using the showmount(8) command. showmount(8) uses other procedures in the NFS MOUNT protocol to report information about the server's exported file systems.

Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab.