
Thursday, June 28, 2012

Procedure for setting up a tunnel with vncserver and PuTTY on AIX


1. Installation on the server(s)

a. Install VNC on Nodo1

[root@ XXXXXXXXXX  ]/>rpm -ivh vnc-3.3.3r2-3.aix5.1.ppc.rpm
vnc                         ##################################################
[root@ XXXXXXXXXX  ]/>

b. Install VNC on Nodo2

[root@NODO2]/>rpm -ivh vnc-3.3.3r2-3.aix5.1.ppc.rpm
vnc                         ##################################################
[root@NODO2]/>

c. Edit /etc/ssh/sshd_config and remove the comment marker (#) from the X11 lines, like this:

[root@nodo1]/>vi /etc/ssh/sshd_config
#GatewayPorts no
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
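
For the change to take effect, the SSH daemon must be restarted; a minimal sketch using the AIX SRC (assuming OpenSSH is registered as the sshd subsystem):

# restart sshd so the X11 forwarding settings are re-read
stopsrc -s sshd
startsrc -s sshd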



2. Start vncserver on the server

a. From a new PuTTY session, create the password for vncserver as follows (make sure the tunnel is configured in PuTTY):

[grid@ XXXXXXXXXX  ]/home/grid>vncserver
You will require a password to access your desktops.
Password: 
Verify: 

b. Start vncserver

[grid@ XXXXXXXXXX  ]/home/grid>vncserver
New 'X' desktop is  XXXXXXXXXX :1
Creating default startup script /home/grid/.vnc/xstartup
Starting applications specified in /home/grid/.vnc/xstartup
Log file is /home/grid/.vnc/XXXXXXXXXX :1.log
[grid@XXXXXXXXXX ]/home/grid>

Do not close the PuTTY session.
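
If you prefer to script the tunnel instead of configuring it in the PuTTY GUI, plink (PuTTY's command-line companion) accepts the same port-forwarding syntax; a sketch, assuming the VNC desktop is :1 on nodo1:

# forward local port 5901 to display :1 on the server
plink -ssh -L 5901:localhost:5901 grid@nodo1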

3. Connect to vncserver from the client

a. From the client

Run the VNC Viewer against localhost:5901 (the default values); otherwise, check the log for the correct values.
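
The port is derived from the display number: 5900 + N for desktop :N, so the desktop :1 created above listens on 5901. On the server you can confirm the listener, for example:

# display :1 -> port 5901, :2 -> 5902, and so on
netstat -an | grep 5901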

Figure 1. Ultr@VNC connection window


If it fails, a window similar to this one will appear:

Figure 2. Ultr@VNC error message window

If it does not fail, a window will appear requesting the password you configured in step 2.a:

Figure 3. VNC client password window

To fix the reported failure, the procedure is:

1. Inspect the log and verify that the error is similar to the one shown below

[root@ XXXXXXXXXX ]/>tail -f /.vnc/ XXXXXXXXXX :1.log
could not open default font 'fixed'
xrdb: Connection refused
xrdb: Can't open display ' XXXXXXXXXX :1'
1356-265 xsetroot:  Unable to open display:   XXXXXXXXXX :1.
Warning: This program is an suid-root program or is being run by the root user.
The full text of the error or warning message cannot be safely formatted
in this environment. You may get a more descriptive message by running the
program as a non-root user or by removing the suid bit on the executable.
xterm Xt error: Can't open display: %s
twm:  unable to open display " XXXXXXXXXX :1"

2. SOLUTION

a. Edit the vncserver script and add the following instruction at line 151

$cmd .= " -fp /usr/lib/X11/fonts/,/usr/lib/X11/fonts/misc/";

[root@XXXXXXXXXX ]/>cd /usr/bin/X11
[root@XXXXXXXXXX]/usr/bin/X11>ls -la vnc* 
lrwxrwxrwx    1 root     system           39 Jun 22 11:08 vncconnect -> ../../../../opt/freeware/bin/vncconnect
lrwxrwxrwx    1 root     system           38 Jun 22 11:08 vncpasswd -> ../../../../opt/freeware/bin/vncpasswd
lrwxrwxrwx    1 root     system           38 Jun 22 11:08 vncserver -> ../../../../opt/freeware/bin/vncserver
lrwxrwxrwx    1 root     system           38 Jun 22 11:08 vncviewer -> ../../../../opt/freeware/bin/vncviewer
[root@ XXXXXXXXXX ]/usr/bin/X11>cd /opt/freeware/bin/ 
[root@ XXXXXXXXXX ]/opt/freeware/bin>chmod u+w vncserver
[root@ XXXXXXXXXX ]/opt/freeware/bin>ls -la vncserver
-rwxr-xr-x    1 root     system        13312 Oct 26 2000  vncserver
[root@ XXXXXXXXXX ]/opt/freeware/bin>vi  vncserver

:151 $cmd .= " -fp /usr/lib/X11/fonts/,/usr/lib/X11/fonts/misc/";
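
The new font path only applies to servers started after the edit; a quick sketch to recycle the session (assuming your desktop is :1):

# stop the existing desktop and start a fresh one with the patched font path
vncserver -kill :1
vncserver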


Wednesday, June 27, 2012

Prerequisites for installing Oracle RAC 11gR2 on AIX 6.1 (step by step)




1.     Certification


To certify the platform and the Oracle version you want to install, follow these instructions:

1.     Log in to My Oracle Support (MOS)

2.     Go to the "Certifications" tab

3.     In the Product field, select "Oracle Real Application Clusters"

4.     In the Release field, select <the database version>, for example "11.2.0.3"

5.     In the Platform field, select <the desired OS platform>, for example "IBM AIX on POWER Systems (64-bit) (6.1)"

6.     Click the "Search" button

7.     Review the certified products






2.     Review of Minimum Requirements


(At this point it is also important to have an OS specialist available to support us.)



To keep to the required minimums, it is advisable to follow this reference: Oracle® Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX.



To summarize, I am sharing a checklist to be completed before starting the installation; if any of the following requirements is not met, request a fix before the installation begins.


3. Requirement checks (MUST BE REPEATED ON BOTH NODES)

·         Physical memory (at least 1.5 GB of RAM) => I recommend a minimum of 2 GB




[root@XXXXXXXXXX]/>/usr/sbin/lsattr -E -l sys0 -a realmem
realmem 100663296 Amount of usable physical memory in Kbytes False
[root@ XXXXXXXXXX ]/>
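
realmem is reported in KB, so 100663296 KB / 1048576 = 96 GB, far above the recommended 2 GB. A one-liner to print it in GB directly:

# convert the realmem attribute (KB) to GB
/usr/sbin/lsattr -E -l sys0 -a realmem | awk '{printf "%.1f GB\n", $2/1048576}'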



·         Swap space equal to RAM => I recommend double, up to a maximum of 16 GB.

[root@ XXXXXXXXXX ]/>/usr/sbin/lsps -a
Page Space      Physical Volume   Volume Group    Size %Used Active Auto  Type Chksum
paging00        hdisk0            rootvg        8448MB     1   yes   yes    lv     0
hd6             hdisk0            rootvg        8448MB     1   yes   yes    lv     0
[root@ XXXXXXXXXX ]/>
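
If swap falls short, an existing paging space can be grown online with chps; a sketch, assuming rootvg has free partitions (the size added is the number of logical partitions times the volume group's PP size):

# grow paging00 by 8 logical partitions
chps -s 8 paging00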

·         At least 1 GB free in /tmp => I recommend at least double, 2 GB free. (fix procedure below)




[root@ XXXXXXXXXX ]/>df -k /tmp
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd3          1310720    948400   28%      716     1% /tmp
 
[root@ XXXXXXXXXX ]/>
 [root@oraprcrac1pro]/>df -g /tmp
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd3           1.25      0.90   28%      716     1% /tmp
 
[root@ XXXXXXXXXX ]/>chfs -a size=2G /tmp
Filesystem size changed to 4194304
 
[root@ XXXXXXXXXX ]/>df -g /tmp
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd3           2.00      1.65   18%      716     1% /tmp
[root@oraprcrac1pro]/>
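
Note the units: chfs reports the new size in 512-byte blocks, so 4194304 x 512 bytes = 2 GB, matching the df -g output above.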

·         Operating System Version




[root@nodo1]/>oslevel
6.1.0.0
[root@nodo1]/>uname -a
AIX nodo1 1 6 00F6BE614C00
[root@nodo1]/>
[root@nodo1]/>oslevel -s
6100-06-01-1043   => AIX 6.1 Update 6
[root@nodo1]/>
[root@nodo1]/>/usr/bin/getconf HARDWARE_BITMODE
64
[root@nodo1]/>

·         FILESET




[root@nodo1]/>lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte  xlC.aix61.rte
  Fileset                      Level  State      Description
  ----------------------------------------------------------------------------
  bos.adt.base              6.1.6.15  COMMITTED  Base Application Development
                                                 Toolkit
  bos.adt.lib                6.1.2.0  COMMITTED  Base Application Development
                                                 Libraries
  bos.adt.libm               6.1.6.0  COMMITTED  Base Application Development
                                                 Math Library
  bos.perf.libperfstat      6.1.6.15  COMMITTED  Performance Statistics Library
                                                 Interface
  bos.perf.perfstat         6.1.6.15  COMMITTED  Performance Statistics
                                                 Interface
  bos.perf.proctools        6.1.6.15  COMMITTED  Proc Filesystem Tools
 
  bos.adt.base              6.1.6.15  COMMITTED  Base Application Development
  bos.perf.libperfstat      6.1.6.15  COMMITTED  Performance Statistics Library
  bos.perf.perfstat         6.1.6.15  COMMITTED  Performance Statistics                                                 
  xlC.aix61.rte             11.1.0.1  COMMITTED  XL C/C++ Runtime for AIX 6.1
  xlC.rte                   11.1.0.1  COMMITTED  XL C/C++ Runtime
[root@nodo1]/>lslpp -l gpfs.base
lslpp: Fileset gpfs.base not installed. (not required when using ASM)



·         Patch




[root@nodo1]/> instfix -i -k "IZ41855 IZ51456 IZ52319 IZ97457 IZ89165"
    There was no data for IZ41855 in the fix database.
    There was no data for IZ51456 in the fix database.
    There was no data for IZ52319 in the fix database.
    There was no data for IZ97457 in the fix database.
    There was no data for IZ89165 in the fix database.
[root@nodo1]/>



Not necessary, since this is AIX 6.1 Update 6.



·         Network


Ideally the following table should be satisfied; however, in several installations I have found that the IPs must also be defined in the "hosts" file.

Identity         Home Node   Host Node                        Given Name       Type      Address        Address Assigned By   Resolved By
Node 1 Public    Node 1      node1                            node1            Public    192.0.2.101    Fixed                 DNS
Node 1 VIP       Node 1      Selected by Oracle Clusterware   node1-vip        Virtual   192.0.2.104    Fixed                 DNS and hosts file
Node 1 Private   Node 1      node1                            node1-priv       Private   192.168.0.1    Fixed                 DNS and hosts file, or none
Node 2 Public    Node 2      node2                            node2            Public    192.0.2.102    Fixed                 DNS
Node 2 VIP       Node 2      Selected by Oracle Clusterware   node2-vip        Virtual   192.0.2.105    Fixed                 DNS and hosts file
Node 2 Private   Node 2      node2                            node2-priv       Private   192.168.0.2    Fixed                 DNS and hosts file, or none
SCAN VIP 1       none        Selected by Oracle Clusterware   mycluster-scan   Virtual   192.0.2.201    Fixed                 DNS
SCAN VIP 2       none        Selected by Oracle Clusterware   mycluster-scan   Virtual   192.0.2.202    Fixed                 DNS
SCAN VIP 3       none        Selected by Oracle Clusterware   mycluster-scan   Virtual   192.0.2.203    Fixed                 DNS



For the installation you can use the hosts file, keeping in mind that you would only be able to obtain one of the three SCAN IPs, like this:



Nodo1



[root@nodo1]/>vi /etc/hosts
127.0.0.1    loopback localhost      # loopback (lo0) name/address
192.0.2.101  nodo1
192.168.0.1  nodo1-priv
192.0.2.104   nodo1-vip
192.0.2.102  NODO2
192.0.2.105  NODO2-vip
192.168.0.2  NODO2-priv
# So that the installation check passes
192.0.2.201  mycluster-scan



Nodo2

 
[root@nodo2]/>vi /etc/hosts
127.0.0.1    loopback localhost      # loopback (lo0) name/address
192.0.2.101  nodo1
192.168.0.1  nodo1-priv
192.0.2.104   nodo1-vip
192.0.2.102  NODO2
192.0.2.105  NODO2-vip
192.168.0.2  NODO2-priv
# So that the installation check passes
192.0.2.201  mycluster-scan
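
Once both hosts files are in place, a quick loop confirms that every cluster name resolves on each node; a sketch to run on both:

# check name resolution for all cluster addresses
for h in nodo1 nodo1-vip nodo1-priv NODO2 NODO2-vip NODO2-priv mycluster-scan
do
  echo "$h -> $(host $h)"
done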

·         Kernel parameters for UDP and TCP




[root@nodo1]/>/usr/sbin/no -a | fgrep ephemeral
       tcp_ephemeral_high = 65535
        tcp_ephemeral_low = 32768
       udp_ephemeral_high = 65535
        udp_ephemeral_low = 32768
[root@nodo1]/>
 
 
These defaults do not match the recommended ephemeral port range (9000 to 65500); it is fixed as follows:
 
[root@nodo1]/>/usr/sbin/no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
Setting tcp_ephemeral_low to 9000
Setting tcp_ephemeral_low to 9000 in nextboot file
Setting tcp_ephemeral_high to 65500
Setting tcp_ephemeral_high to 65500 in nextboot file
[root@nodo1]/>/usr/sbin/no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500
Setting udp_ephemeral_low to 9000
Setting udp_ephemeral_low to 9000 in nextboot file
Setting udp_ephemeral_high to 65500
Setting udp_ephemeral_high to 65500 in nextboot file
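
Re-running the check should now show the new range for both protocols:

# confirm the persisted values
/usr/sbin/no -a | fgrep ephemeral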



·         Enable fullcore




[root@nodo1]/>lsattr -El sys0 -a fullcore
fullcore false Enable full CORE dump True
[root@nodo1]/>
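
Since fullcore is currently false, enable it with chdev (the attribute is stored in the ODM, so it survives reboots):

# enable full core dumps for better diagnostics
/usr/sbin/chdev -l sys0 -a fullcore=true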



·         Set the core dump size to unlimited:
[root@nodo1]/>ulimit -c unlimited



·         Set the maximum file size to unlimited:
[root@nodo1]/>ulimit -f unlimited
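
ulimit only affects the current shell; to make the limits permanent for the installation owners, set them in /etc/security/limits, for example (a sketch; -1 means unlimited):

grid:
        fsize = -1
        core = -1

oracle:
        fsize = -1
        core = -1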



·         Check the aio servers and aio_maxreqs values




[root@nodo1]/>/usr/sbin/ioo -a | grep aio
                    aio_active = 0
                   aio_maxreqs = 65536
                aio_maxservers = 30
                aio_minservers = 3
         aio_server_inactivity = 300
              posix_aio_active = 0
             posix_aio_maxreqs = 65536
          posix_aio_maxservers = 30
          posix_aio_minservers = 3
   posix_aio_server_inactivity = 300
[root@nodo1]/>
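
On AIX 6.1 the AIO subsystem starts on demand (hence aio_active = 0 before first use), and aio_maxreqs already shows the 65536 the installer checks for. If a value ever needs raising, ioo persists it; a sketch:

# persist the recommended maximum number of AIO requests
/usr/sbin/ioo -p -o aio_maxreqs=65536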



·         Create and verify operating system users and groups




[root@nodo1]/>mkgroup -'A' id='1000' adms='root' oinstall
[root@nodo1]/>mkgroup -'A' id='1100' adms='root' asmadmin
[root@nodo1]/>mkgroup -'A' id='1200' adms='root' dba
[root@nodo1]/>mkgroup -'A' id='1300' adms='root' asmdba
[root@nodo1]/>mkgroup -'A' id='1301' adms='root' asmoper
[root@nodo1]/>mkuser id='1100' pgrp='oinstall' groups='asmadmin,asmdba,asmoper,dba' home='/home/grid' grid
[root@nodo1]/>mkuser id='1101' pgrp='oinstall' groups='dba,asmdba' home='/home/oracle' oracle
[root@nodo1]/>mkdir -p /orafs/app/11.2.0/grid
[root@nodo1]/>chown -R grid:oinstall /orafs
[root@nodo1]/>mkdir -p /orafs/app/oracle
[root@nodo1]/>chown oracle:oinstall /orafs/app/oracle
[root@nodo1]/>chmod -R 775 /orafs
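
It is also worth setting the installation owners' umask to 022 now, since the installer's prerequisite checks expect it; a sketch run as grid and again as oracle:

# default umask for new files created by the installation owner
echo "umask 022" >> $HOME/.profile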



[root@nodo1]/>id grid
uid=1100(grid) gid=1000(oinstall) groups=1100(asmadmin),1300(asmdba),1301(asmoper),1200(dba)
[root@nodo1]/>id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1200(dba),1300(asmdba)
[root@nodo1]/>



[root@nodo1]/> chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
[root@nodo1]/>lsuser -a capabilities grid
grid capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
[root@nodo1]/>



·         Set up SSH user equivalency




Create passwords for the grid and oracle users



Nodo1 (pass=oracle21)



[root@nodo1]/>passwd grid
Changing password for "grid"
grid's New password:
Re-enter grid's new password:
[root@nodo1]/>passwd oracle
Changing password for "oracle"
oracle's New password:
Re-enter oracle's new password:
[root@nodo1]/>



Nodo2 (pass=oracle21)

[root@NODO2]/tmp>passwd grid
Changing password for "grid"
grid's New password:
Re-enter grid's new password:
[root@NODO2]/tmp>passwd oracle
Changing password for "oracle"
oracle's New password:
Re-enter oracle's new password:
[root@NODO2]/tmp>



To avoid the "WARNING: Your password has expired." problem during the setup, do the following:



Nodo1 (pass=oracle01)

[grid@nodo1]/orafs/ORAC/grid>ssh grid@NODO2
grid@NODO2's password:
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for "grid"
grid's Old password:
grid's New password:
Re-enter grid's new password:
[grid@NODO2]/home/grid>



Nodo2 (pass=oracle01)

[grid@NODO2]/home/grid>ssh grid@nodo1
The authenticity of host 'nodo1 (xxx.xx.xxx.xx)' can't be established.
RSA key fingerprint is 9f:42:3e:df:b2:b7:45:e0:ad:3b:b6:14:f2:7b:c3:52.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nodo1,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
grid@nodo1's password:
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for "grid"
grid's Old password:
grid's New password:
Re-enter grid's new password:
[grid@nodo1]/home/grid>



Run the sshUserSetup utility (new in 11g), using the recently changed password



Nodo1, grid user



[grid@nodo1]/orafs/ORAC/grid>$GI_OUI/sshsetup/sshUserSetup.sh -user grid -hosts "nodo1 NODO2" -advanced -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2012-06-22-09-54-15.log
Hosts are nodo1 NODO2
user is grid
Platform:- AIX
Checking if the remote hosts are reachable
PING nodo1 (xxx.xx.xxx.xx): 56 data bytes
64 bytes from xxx.xx.xxx.xx: icmp_seq=0 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=1 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=2 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=3 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=4 ttl=255 time=0 ms
 
--- nodo1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
PING NODO2 (xxx.xx.xxx.xx): 56 data bytes
64 bytes from xxx.xx.xxx.xx: icmp_seq=0 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=1 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=2 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=3 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=4 ttl=255 time=0 ms
 
--- NODO2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
Remote host reachability check succeeded.
The following hosts are reachable: nodo1 NODO2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost nodo1
numhosts 2
The script will setup SSH connectivity from the host nodo1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host nodo1
and the remote hosts without being prompted for passwords or confirmations.
 
NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.
 
NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.
 
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
 
The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host nodo1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create /home/grid/.ssh/config file on remote host nodo1. If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host nodo1.
Warning: Permanently added 'nodo1,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
grid@nodo1's password:
Done with creating .ssh directory and setting permissions on remote host nodo1.
Creating .ssh directory and setting permissions on remote host NODO2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create /home/grid/.ssh/config file on remote host NODO2. If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host NODO2.
Warning: Permanently added 'NODO2,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
grid@NODO2's password:
Done with creating .ssh directory and setting permissions on remote host NODO2.
Copying local host public key to the remote host nodo1
The user may be prompted for a password or passphrase here since the script would be using SCP for host nodo1.
grid@nodo1's password:
Done copying local host public key to the remote host nodo1
Copying local host public key to the remote host NODO2
The user may be prompted for a password or passphrase here since the script would be using SCP for host NODO2.
grid@NODO2's password:
Done copying local host public key to the remote host NODO2
Creating keys on remote host nodo1 if they do not exist already. This is required to setup SSH on host nodo1.
 
Creating keys on remote host NODO2 if they do not exist already. This is required to setup SSH on host NODO2.
Generating public/private rsa key pair.
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
5c:95:93:2f:b7:94:9a:1b:09:c9:e9:b0:77:ae:be:6d grid@NODO2
The key's randomart image is:
+--[ RSA 1024]----+
|            .o   |
|           .+    |
|         ..o o . |
|       ...= . =  |
|        S+ . B . |
|        . o * .  |
|         . o o   |
|           .E    |
|         .++.    |
+-----------------+
Updating authorized_keys file on remote host nodo1
Updating known_hosts file on remote host nodo1
Updating authorized_keys file on remote host NODO2
Updating known_hosts file on remote host NODO2
cat: cannot open /home/grid/.ssh/known_hosts.tmp
cat: cannot open /home/grid/.ssh/authorized_keys.tmp
SSH setup is complete.
 
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. /home/grid or /home/grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--nodo1:--
Running /usr/bin/ssh -x -l grid nodo1 date to verify SSH connectivity has been setup from local host to nodo1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Fri Jun 22 09:54:43 CDT 2012
------------------------------------------------------------------------
--NODO2:--
Running /usr/bin/ssh -x -l grid NODO2 date to verify SSH connectivity has been setup from local host to NODO2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Fri Jun 22 09:54:44 CDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from nodo1 to nodo1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Fri Jun 22 09:54:45 CDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from nodo1 to NODO2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Fri Jun 22 09:54:45 CDT 2012
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
 
To verify the above procedure, on Nodo1 as the grid user:
 
[grid@nodo1]/orafs/ORAC/grid>/usr/bin/ssh -x -l grid nodo1 date
Fri Jun 22 09:54:54 CDT 2012
[grid@nodo1]/orafs/ORAC/grid>/usr/bin/ssh -x -l grid NODO2 date
Fri Jun 22 09:55:01 CDT 2012
[grid@nodo1]/orafs/ORAC/grid>



Nodo2, grid user



[grid@NODO2]/home/grid>rm -rf $HOME/.ssh
[grid@NODO2]/home/grid>export GI_OUI=/orafs/ORAC/grid/
[grid@NODO2]/home/grid>echo $GI_OUI
/orafs/ORAC/grid/
[grid@NODO2]/home/grid>ssh grid@nodo1
The authenticity of host 'nodo1 (xxx.xx.xxx.xx)' can't be established.
RSA key fingerprint is 9f:42:3e:df:b2:b7:45:e0:ad:3b:b6:14:f2:7b:c3:52.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nodo1,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
grid@nodo1's password:
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for "grid"
grid's Old password:
grid's New password:
Re-enter grid's new password:
[grid@nodo1]/home/grid>exit
Connection to nodo1 closed.
[grid@NODO2]/home/grid>$GI_OUI/sshsetup/sshUserSetup.sh -user grid -hosts "nodo1 NODO2" -advanced -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2012-06-22-09-56-26.log
Hosts are nodo1 NODO2
user is grid
Platform:- AIX
Checking if the remote hosts are reachable
PING nodo1 (xxx.xx.xxx.xx): 56 data bytes
64 bytes from xxx.xx.xxx.xx: icmp_seq=0 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=1 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=2 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=3 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=4 ttl=255 time=0 ms
 
--- nodo1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
PING NODO2 (xxx.xx.xxx.xx): 56 data bytes
64 bytes from xxx.xx.xxx.xx: icmp_seq=0 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=1 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=2 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=3 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=4 ttl=255 time=0 ms
 
--- NODO2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
Remote host reachability check succeeded.
The following hosts are reachable: nodo1 NODO2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost nodo1
numhosts 2
The script will setup SSH connectivity from the host NODO2 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host NODO2
and the remote hosts without being prompted for passwords or confirmations.
 
NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.
 
NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.
 
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
 
The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host nodo1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create /home/grid/.ssh/config file on remote host nodo1. If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host nodo1.
Warning: Permanently added 'nodo1,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
Done with creating .ssh directory and setting permissions on remote host nodo1.
Creating .ssh directory and setting permissions on remote host NODO2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create /home/grid/.ssh/config file on remote host NODO2. If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host NODO2.
Warning: Permanently added 'NODO2,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
grid@NODO2's password:
Done with creating .ssh directory and setting permissions on remote host NODO2.
Copying local host public key to the remote host nodo1
The user may be prompted for a password or passphrase here since the script would be using SCP for host nodo1.
Done copying local host public key to the remote host nodo1
Copying local host public key to the remote host NODO2
The user may be prompted for a password or passphrase here since the script would be using SCP for host NODO2.
grid@NODO2's password:
Done copying local host public key to the remote host NODO2
Creating keys on remote host nodo1 if they do not exist already. This is required to setup SSH on host nodo1.
 
Creating keys on remote host NODO2 if they do not exist already. This is required to setup SSH on host NODO2.
 
Updating authorized_keys file on remote host nodo1
Updating known_hosts file on remote host nodo1
Updating authorized_keys file on remote host NODO2
Updating known_hosts file on remote host NODO2
cat: cannot open /home/grid/.ssh/known_hosts.tmp
cat: cannot open /home/grid/.ssh/authorized_keys.tmp
SSH setup is complete.
 
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. /home/grid or /home/grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--nodo1:--
Running /usr/bin/ssh -x -l grid nodo1 date to verify SSH connectivity has been setup from local host to nodo1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Fri Jun 22 09:56:48 CDT 2012
------------------------------------------------------------------------
--NODO2:--
Running /usr/bin/ssh -x -l grid NODO2 date to verify SSH connectivity has been setup from local host to NODO2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Fri Jun 22 09:56:48 CDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from nodo1 to nodo1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Fri Jun 22 09:56:49 CDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from nodo1 to NODO2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Fri Jun 22 09:56:50 CDT 2012
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
 
To verify the above procedure, on Nodo2 as the grid user:





[grid@NODO2]/home/grid>/usr/bin/ssh -x -l grid nodo1 date
Fri Jun 22 09:57:12 CDT 2012
[grid@NODO2]/home/grid>/usr/bin/ssh -x -l grid NODO2 date
Fri Jun 22 09:57:18 CDT 2012
[grid@NODO2]/home/grid>







Nodo1, oracle user





To avoid the "WARNING: Your password has expired." problem during the setup, do the following:



Nodo1

[oracle@nodo1]/home/oracle>ssh oracle@NODO2
The authenticity of host 'NODO2 (xxx.xx.xxx.xx)' can't be established.
RSA key fingerprint is f6:94:c6:c7:83:d5:c7:07:bd:eb:6e:32:6c:ba:be:b1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'NODO2,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
oracle@NODO2's password:
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for "oracle"
oracle's Old password:
oracle's New password:
Re-enter oracle's new password:
[oracle@NODO2]/home/oracle>



Nodo2

[root@NODO2]/tmp>su - oracle
[oracle@NODO2]/home/oracle>ssh oracle@nodo1
The authenticity of host 'nodo1 (xxx.xx.xxx.xx)' can't be established.
RSA key fingerprint is 9f:42:3e:df:b2:b7:45:e0:ad:3b:b6:14:f2:7b:c3:52.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nodo1,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
oracle@nodo1's password:
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for "oracle"
oracle's Old password:
oracle's New password:
Re-enter oracle's new password:
[oracle@nodo1]/home/oracle>





Nodo1



[oracle@nodo1]/home/oracle>$GI_OUI/sshsetup/sshUserSetup.sh -user oracle -hosts "nodo1 NODO2" -advanced -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2012-06-22-10-16-48.log
Hosts are nodo1 NODO2
user is oracle
Platform:- AIX
Checking if the remote hosts are reachable
PING nodo1 (xxx.xx.xxx.xx): 56 data bytes
64 bytes from xxx.xx.xxx.xx: icmp_seq=0 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=1 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=2 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=3 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=4 ttl=255 time=0 ms
 
--- nodo1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
PING NODO2 (xxx.xx.xxx.xx): 56 data bytes
64 bytes from xxx.xx.xxx.xx: icmp_seq=0 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=1 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=2 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=3 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=4 ttl=255 time=0 ms
 
--- NODO2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
Remote host reachability check succeeded.
The following hosts are reachable: nodo1 NODO2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost nodo1
numhosts 2
The script will setup SSH connectivity from the host nodo1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host nodo1
and the remote hosts without being prompted for passwords or confirmations.
 
NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.
 
NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.
 
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
 
The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host nodo1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create /home/oracle/.ssh/config file on remote host nodo1. If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host nodo1.
Warning: Permanently added 'nodo1,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
oracle@nodo1's password:
Done with creating .ssh directory and setting permissions on remote host nodo1.
Creating .ssh directory and setting permissions on remote host NODO2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create /home/oracle/.ssh/config file on remote host NODO2. If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host NODO2.
Warning: Permanently added 'NODO2,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
oracle@NODO2's password:
Done with creating .ssh directory and setting permissions on remote host NODO2.
Copying local host public key to the remote host nodo1
The user may be prompted for a password or passphrase here since the script would be using SCP for host nodo1.
oracle@nodo1's password:
Done copying local host public key to the remote host nodo1
Copying local host public key to the remote host NODO2
The user may be prompted for a password or passphrase here since the script would be using SCP for host NODO2.
oracle@NODO2's password:
Done copying local host public key to the remote host NODO2
Creating keys on remote host nodo1 if they do not exist already. This is required to setup SSH on host nodo1.
 
Creating keys on remote host NODO2 if they do not exist already. This is required to setup SSH on host NODO2.
Generating public/private rsa key pair.
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
69:28:22:06:3f:6a:0a:6e:ee:2c:a2:a8:77:9d:5b:93 oracle@NODO2
The key's randomart image is:
+--[ RSA 1024]----+
|                 |
|                 |
|.                |
|..     . .       |
|..+ . . S        |
|.o o . . .       |
|o.   . .E        |
|X.. . o. .       |
|&* .  ..         |
+-----------------+
Updating authorized_keys file on remote host nodo1
Updating known_hosts file on remote host nodo1
Updating authorized_keys file on remote host NODO2
Updating known_hosts file on remote host NODO2
cat: cannot open /home/oracle/.ssh/known_hosts.tmp
cat: cannot open /home/oracle/.ssh/authorized_keys.tmp
SSH setup is complete.
 
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. /home/oracle or /home/oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--nodo1:--
Running /usr/bin/ssh -x -l oracle nodo1 date to verify SSH connectivity has been setup from local host to nodo1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Fri Jun 22 10:17:17 CDT 2012
------------------------------------------------------------------------
--NODO2:--
Running /usr/bin/ssh -x -l oracle NODO2 date to verify SSH connectivity has been setup from local host to NODO2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Fri Jun 22 10:17:17 CDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from nodo1 to nodo1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Fri Jun 22 10:17:18 CDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from nodo1 to NODO2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Fri Jun 22 10:17:18 CDT 2012
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
 
 

[oracle@nodo1]/home/oracle>usr/bin/ssh -x -l oracle nodo1 date
ksh: usr/bin/ssh:  not found
[oracle@nodo1]/home/oracle>/usr/bin/ssh -x -l oracle nodo1 date
Fri Jun 22 10:17:37 CDT 2012
[oracle@nodo1]/home/oracle>/usr/bin/ssh -x -l oracle NODO2 date
Fri Jun 22 10:17:43 CDT 2012
[oracle@nodo1]/home/oracle>



Nodo2

[oracle@NODO2]/home/oracle>export GI_OUI=/orafs/ORAC/grid

[oracle@NODO2]/home/oracle>$GI_OUI/sshsetup/sshUserSetup.sh -user oracle -hosts "nodo1 NODO2" -advanced -noPromptPassphrase

The output of this script is also logged into /tmp/sshUserSetup_2012-06-22-10-20-28.log
Hosts are nodo1 NODO2
user is oracle
Platform:- AIX
Checking if the remote hosts are reachable
PING nodo1 (xxx.xx.xxx.xx): 56 data bytes
64 bytes from xxx.xx.xxx.xx: icmp_seq=0 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=1 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=2 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=3 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=4 ttl=255 time=0 ms

--- nodo1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
PING NODO2 (xxx.xx.xxx.xx): 56 data bytes
64 bytes from xxx.xx.xxx.xx: icmp_seq=0 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=1 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=2 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=3 ttl=255 time=0 ms
64 bytes from xxx.xx.xxx.xx: icmp_seq=4 ttl=255 time=0 ms

--- NODO2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
Remote host reachability check succeeded.
The following hosts are reachable: nodo1 NODO2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost nodo1
numhosts 2
The script will setup SSH connectivity from the host NODO2 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host NODO2
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host nodo1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create /home/oracle/.ssh/config file on remote host nodo1. If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host nodo1.
Warning: Permanently added 'nodo1,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
Done with creating .ssh directory and setting permissions on remote host nodo1.
Creating .ssh directory and setting permissions on remote host NODO2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create /home/oracle/.ssh/config file on remote host NODO2. If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host NODO2.
Warning: Permanently added 'NODO2,xxx.xx.xxx.xx' (RSA) to the list of known hosts.
oracle@NODO2's password:
Done with creating .ssh directory and setting permissions on remote host NODO2.
Copying local host public key to the remote host nodo1
The user may be prompted for a password or passphrase here since the script would be using SCP for host nodo1.
Done copying local host public key to the remote host nodo1
Copying local host public key to the remote host NODO2
The user may be prompted for a password or passphrase here since the script would be using SCP for host NODO2.
oracle@NODO2's password:
Done copying local host public key to the remote host NODO2
Creating keys on remote host nodo1 if they do not exist already. This is required to setup SSH on host nodo1.

Creating keys on remote host NODO2 if they do not exist already. This is required to setup SSH on host NODO2.

Updating authorized_keys file on remote host nodo1
Updating known_hosts file on remote host nodo1
Updating authorized_keys file on remote host NODO2
Updating known_hosts file on remote host NODO2
cat: cannot open /home/oracle/.ssh/known_hosts.tmp
cat: cannot open /home/oracle/.ssh/authorized_keys.tmp
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. /home/oracle or /home/oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--nodo1:--
Running /usr/bin/ssh -x -l oracle nodo1 date to verify SSH connectivity has been setup from local host to nodo1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Fri Jun 22 10:20:51 CDT 2012
------------------------------------------------------------------------
--NODO2:--
Running /usr/bin/ssh -x -l oracle NODO2 date to verify SSH connectivity has been setup from local host to NODO2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Fri Jun 22 10:20:51 CDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from nodo1 to nodo1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Fri Jun 22 10:20:52 CDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from nodo1 to NODO2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Fri Jun 22 10:20:53 CDT 2012
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.

[oracle@NODO2]/home/oracle>/usr/bin/ssh -x -l oracle nodo1 date
Fri Jun 22 10:20:59 CDT 2012
[oracle@NODO2]/home/oracle>/usr/bin/ssh -x -l oracle NODO2 date
Fri Jun 22 10:21:07 CDT 2012
[oracle@NODO2]/home/oracle>





VMM Parameter Checking

[grid@nodo1]/orafs/ORAC/grid>exit
[root@nodo1]/>vmo -L minperm%
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
minperm%                  3      3      3      1      100    % memory          D
--------------------------------------------------------------------------------
[root@nodo1]/>vmo -L maxperm%
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
maxperm%                  90     90     90     1      100    % memory          D
     minperm%
     maxclient%
--------------------------------------------------------------------------------
[root@nodo1]/>vmo -L maxclient%
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
maxclient%                90     90     90     1      100    % memory          D
     maxperm%
     minperm%
--------------------------------------------------------------------------------
[root@nodo1]/>vmo -L lru_file_repage
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
lru_file_repage           0      0      0      0      1      boolean           D
--------------------------------------------------------------------------------
[root@nodo1]/>vmo -L strict_maxclient
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
strict_maxclient          1      1      1      0      1      boolean           D
     strict_maxperm
--------------------------------------------------------------------------------
[root@nodo1]/>vmo -L strict_maxperm
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
strict_maxperm            0      0      0      0      1      boolean           D
     strict_maxclient
--------------------------------------------------------------------------------
[root@nodo1]/>
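
All six tunables already show the values Oracle recommends (minperm%=3, maxperm%=90, maxclient%=90, lru_file_repage=0, strict_maxclient=1, strict_maxperm=0). If any of them differed, vmo can persist a change; a sketch for one tunable:

# persist a VMM tunable across reboots
vmo -p -o minperm%=3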





NTP change (add the "-x" option so xntpd slews the clock instead of stepping it, as Oracle Clusterware requires)



/etc/rc.tcpip Nodo1



# Start up Network Time Protocol (NTP) daemon
start /usr/sbin/xntpd "$src_running" "-x"



/etc/rc.tcpip Nodo2



# Start up Network Time Protocol (NTP) daemon
start /usr/sbin/xntpd "$src_running" "-x"



Nodo1

[root@nodo1]/>startsrc -s xntpd
0513-029 The xntpd Subsystem is already active.
Multiple instances are not supported.
[root@nodo1]/>stopsrc -s xntpd
0513-044 The /usr/sbin/xntpd Subsystem was requested to stop.
[root@nodo1]/>startsrc -s xntpd
0513-059 The xntpd Subsystem has been started. Subsystem PID is 5505214.
[root@nodo1]/>



Nodo2

[root@NODO2]/tmp>stopsrc -s xntpd
0513-044 The /usr/sbin/xntpd Subsystem was requested to stop.
[root@NODO2]/tmp>startsrc -s xntpd
0513-059 The xntpd Subsystem has been started. Subsystem PID is 1638680.
[root@NODO2]/tmp>
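
To confirm the daemon really came back with the "-x" (slew) option that the Clusterware time check expects, inspect the running process:

# the command line shown should include -x
ps -ef | grep [x]ntpd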







FOR ASM



Nodo1

[root@nodo1]/>/usr/sbin/lspv | grep -i none

hdisk2          none                                None

hdisk3          none                                None

hdisk4          none                                None

hdisk5          none                                None

hdisk6          none                                None

hdisk7          none                                None

hdisk8          none                                None

hdisk9          none                                None

hdisk10         none                                None

hdisk11         none                                None

hdisk12         none                                None

hdisk13         none                                None

hdisk14         none                                None

hdisk15         none                                None

hdisk16         none                                None

hdisk17         none                                None

hdisk18         none                                None

hdisk19         none                                None

[root@nodo1]/>chdev -l hdisk2 -a pv=yes

hdisk2 changed

[root@nodo1]/>chdev -l hdisk3 -a pv=yes

hdisk3 changed

[root@nodo1]/>chdev -l hdisk4 -a pv=yes

hdisk4 changed

[root@nodo1]/>chdev -l hdisk5 -a pv=yes

hdisk5 changed

[root@nodo1]/>chdev -l hdisk6 -a pv=yes

hdisk6 changed

[root@nodo1]/>chdev -l hdisk7 -a pv=yes

hdisk7 changed

[root@nodo1]/>chdev -l hdisk8 -a pv=yes

hdisk8 changed

[root@nodo1]/>chdev -l hdisk9 -a pv=yes

hdisk9 changed

[root@nodo1]/>chdev -l hdisk10 -a pv=yes

hdisk10 changed

[root@nodo1]/>chdev -l hdisk11 -a pv=yes

hdisk11 changed

[root@nodo1]/>chdev -l hdisk12 -a pv=yes

hdisk12 changed

[root@nodo1]/>chdev -l hdisk13 -a pv=yes

hdisk13 changed

[root@nodo1]/>chdev -l hdisk14 -a pv=yes

hdisk14 changed

[root@nodo1]/>chdev -l hdisk15 -a pv=yes

hdisk15 changed

[root@nodo1]/>chdev -l hdisk16 -a pv=yes

hdisk16 changed

[root@nodo1]/>chdev -l hdisk17 -a pv=yes

hdisk17 changed

[root@nodo1]/>chdev -l hdisk18 -a pv=yes

hdisk18 changed

[root@nodo1]/>chdev -l hdisk19 -a pv=yes

hdisk19 changed

[root@nodo1]/>
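
The same PVID assignment can be written as one loop per node, in the same style as the loop used below for the disk sizes (a sketch; adjust the disk list to the LUNs actually presented to the node):

for i in 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ; do chdev -l hdisk$i -a pv=yes ; done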

[root@nodo1]/>/usr/sbin/lspv | grep -i none

hdisk2          00f6be6114dd8611                    None

hdisk3          00f6be6114dd864b                    None

hdisk4          00f6be6114dd8679                    None

hdisk5          00f6be6114dd869f                    None

hdisk6          00f6be6114dd86ca                    None

hdisk7          00f6be6114dd86f4                    None

hdisk8          00f6be6114dd871d                    None

hdisk9          00f6be6114dd8743                    None

hdisk10         00f6be6114dd876e                    None

hdisk11         00f6be6114dd879b                    None

hdisk12         00f6be6114dd87c3                    None

hdisk13         00f6be6114dd87eb                    None

hdisk14         00f6be6114dd8814                    None

hdisk15         00f6be6114dd8849                    None

hdisk16         00f6be6114dd8873                    None

hdisk17         00f6be6114dd88a8                    None

hdisk18         00f6be6114dd88dc                    None

hdisk19         00f6be6114dd8908                    None



Nodo2

[root@NODO2]/>/usr/sbin/lspv | grep -i none

hdisk2          00f6be61e74bb87f                    None

hdisk3          00f6be61e74bb8ae                    None

hdisk4          00f6be61e74bb8db                    None

hdisk5          00f6be61e74bb910                    None

hdisk6          00f6be61e74bb941                    None

hdisk7          00f6be61e74bb973                    None

hdisk8          00f6be61e74bb9a2                    None

hdisk9          00f6be61e74bb9d1                    None

hdisk10         00f6be61e74bb9fb                    None

hdisk11         00f6be61e74bba4b                    None

hdisk12         00f6be61e74bba7e                    None

hdisk13         00f6be61e74bbab3                    None

hdisk14         00f6be61e74bbade                    None

hdisk15         00f6be61e74bbb0f                    None

hdisk16         00f6be61e74bbb39                    None

hdisk17         00f6be61e74bbb62                    None

hdisk18         00f6be61e74bbbaa                    None

hdisk19         00f6be61e74bbbd6                    None

[root@NODO2]/>



DISK SIZES



Nodo1

[root@nodo1]/>for i in 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ; do echo hdisk$i ; bootinfo -s hdisk$i ; done

hdisk2

102400

hdisk3

102400

hdisk4

102400

hdisk5

102400

hdisk6

102400

hdisk7

102400

hdisk8

102400

hdisk9

102400

hdisk10

102400

hdisk11

102400

hdisk12

102400

hdisk13

102400

hdisk14

102400

hdisk15

102400

hdisk16

102400

hdisk17

102400

hdisk18

102400

hdisk19

102400

[root@nodo1]/>



Nodo2

[root@NODO2]/>for i in 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ; do echo hdisk$i ; bootinfo -s hdisk$i ; done

hdisk2

102400

hdisk3

102400

hdisk4

102400

hdisk5

102400

hdisk6

102400

hdisk7

102400

hdisk8

102400

hdisk9

102400

hdisk10

102400

hdisk11

102400

hdisk12

102400

hdisk13

102400

hdisk14

102400

hdisk15

102400

hdisk16

102400

hdisk17

102400

hdisk18

102400

hdisk19

102400

[root@NODO2]/>
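
bootinfo -s reports the disk size in MB, so each of the 18 ASM candidate LUNs is 102400 MB (100 GB), and both nodes see the same sizes.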



To enable simultaneous access to the disks from both nodes, verify that the reserve_policy attribute is set to no_reserve on every hdisk:



Nodo1

[root@nodo1]/>lsattr -E -l hdisk2 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk3 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk4 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk5 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk6 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk7 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk8 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk9 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk10 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk11 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk12 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk13 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk14 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk15 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk16 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk17 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk18 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>lsattr -E -l hdisk19 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@nodo1]/>



Nodo2

[root@NODO2]/>lsattr -E -l hdisk2 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk3 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk4 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk5 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk6 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk7 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk8 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk9 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk10 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk11 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk12 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk13 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk14 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk15 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk16 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk17 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk18 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>lsattr -E -l hdisk19 | grep reserve_

reserve_policy  no_reserve                       Reserve policy                   True

[root@NODO2]/>
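
If any disk did not already show no_reserve, the attribute can be set with chdev before handing the disks to ASM (a sketch; on some multipath drivers the attribute is called reserve_lock rather than reserve_policy):

for i in 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ; do chdev -l hdisk$i -a reserve_policy=no_reserve ; done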



Ownership Change

Nodo1

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk2

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk3

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk4

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk5

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk6

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk7

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk8

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk9

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk10

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk11

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk12

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk13

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk14

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk15

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk16

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk17

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk18

[root@nodo1]/>chown grid:asmadmin /dev/rhdisk19

[root@nodo1]/>



Nodo2

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk2

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk3

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk4

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk5

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk6

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk7

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk8

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk9

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk10

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk11

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk12

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk13

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk14

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk15

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk16

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk17

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk18

[root@NODO2]/>chown grid:asmadmin /dev/rhdisk19

[root@NODO2]/>
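
As with the PVIDs, the ownership change can be compressed into one loop per node (a sketch):

for i in 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ; do chown grid:asmadmin /dev/rhdisk$i ; done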



Permission Change

Nodo1

[root@nodo1]/>chmod 644 /dev/rhdisk2

[root@nodo1]/>chmod 644 /dev/rhdisk3

[root@nodo1]/>chmod 644 /dev/rhdisk4

[root@nodo1]/>chmod 644 /dev/rhdisk5

[root@nodo1]/>chmod 644 /dev/rhdisk6

[root@nodo1]/>chmod 644 /dev/rhdisk7

[root@nodo1]/>chmod 644 /dev/rhdisk8

[root@nodo1]/>chmod 644 /dev/rhdisk9

[root@nodo1]/>chmod 644 /dev/rhdisk10

[root@nodo1]/>chmod 644 /dev/rhdisk11

[root@nodo1]/>chmod 644 /dev/rhdisk12

[root@nodo1]/>chmod 644 /dev/rhdisk13

[root@nodo1]/>chmod 644 /dev/rhdisk14

[root@nodo1]/>chmod 644 /dev/rhdisk15

[root@nodo1]/>chmod 644 /dev/rhdisk16

[root@nodo1]/>chmod 644 /dev/rhdisk17

[root@nodo1]/>chmod 644 /dev/rhdisk18

[root@nodo1]/>chmod 644 /dev/rhdisk19

[root@nodo1]/>



Nodo2

[root@NODO2]/>chmod 644 /dev/rhdisk2

[root@NODO2]/>chmod 644 /dev/rhdisk3

[root@NODO2]/>chmod 644 /dev/rhdisk4

[root@NODO2]/>chmod 644 /dev/rhdisk5

[root@NODO2]/>chmod 644 /dev/rhdisk6

[root@NODO2]/>chmod 644 /dev/rhdisk7

[root@NODO2]/>chmod 644 /dev/rhdisk8

[root@NODO2]/>chmod 644 /dev/rhdisk9

[root@NODO2]/>chmod 644 /dev/rhdisk10

[root@NODO2]/>chmod 644 /dev/rhdisk11

[root@NODO2]/>chmod 644 /dev/rhdisk12

[root@NODO2]/>chmod 644 /dev/rhdisk13

[root@NODO2]/>chmod 644 /dev/rhdisk14

[root@NODO2]/>chmod 644 /dev/rhdisk15

[root@NODO2]/>chmod 644 /dev/rhdisk16

[root@NODO2]/>chmod 644 /dev/rhdisk17

[root@NODO2]/>chmod 644 /dev/rhdisk18

[root@NODO2]/>chmod 644 /dev/rhdisk19



[root@NODO2]/>
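
Note that Oracle's Grid Infrastructure guides typically call for mode 660 on ASM candidate devices so that the asmadmin group can also write; 644 leaves write access to the grid user only. In loop form, keeping the 644 used above (a sketch):

for i in 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ; do chmod 644 /dev/rhdisk$i ; done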



·                     Have a graphical emulator or a console with a resolution greater than 1024 x 768 so that the "runInstaller" works correctly (on AIX, do not use Xming to emulate the graphical environment; it causes problems at step 11 of the installer). (See the article "Procedimiento para la generación de Tunel con vncserver y putty en AIX")