Oracle 2 node RAC primary and standby installation and setup on Oracle Linux

#################################################################################
            Oracle 2 node RAC primary and standby installation and setup
#################################################################################


Servers Requirement:

Servers : 4 (2 primary servers and 2 standby servers)
Type    : Virtual/physical
RAM     : 60 GB each server (choose as per your requirement)
Cores   : 16 each server (choose as per your requirement)


Server Details:

TESTRAC1N01-10.100.70.80-Primary-N1
TESTRAC1N02-10.100.70.81-Primary-N2
TESTRAC2N01-10.100.70.82-Standby-N1
TESTRAC2N02-10.100.70.83-Standby-N2

Storage Requirement:

--For all 4 machines. Size the storage as per your requirement.

Name          Total Size    LUNs
Data          5 TB          1 TB x 5
Reco          300 GB        100 GB x 3
Redo          24 GB         12 GB x 2
OCR/Voting    6 GB          2 GB x 3

--Local storage on each node, around 200 GB.
• /u01 for the Oracle and Grid homes.

Network

-For each RAC:

2 Private IPs
2 Public IPs
2 Virtual IPs
3 SCAN IPs (these are DNS entries)

Note: The listener port (default 1521) must be open between both RACs.
Note: Size all storage as per your requirements.
-After the handover from the OS/storage teams:
-Check whether the DNS entries have been added, and enable internet access on one of the nodes.
-Make sure all IPs have been added to your /etc/hosts files.
-Here below is an example /etc/hosts entry set.
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#Public
10.100.70.80    testrac1n01.test.com          testrac1n01
10.100.70.81    testrac1n02.test.com          testrac1n02
#Private
10.100.20.130   testrac1n01-priv1.test.com    testrac1n01-priv1
10.100.20.131   testrac1n02-priv2.test.com    testrac1n02-priv2
#Virtual
10.100.70.90    testrac1n01-vip1.test.com     testrac1n01-vip1
10.100.70.91    testrac1n02-vip2.test.com     testrac1n02-vip2
#Scan
10.100.70.85    rac1-test-scan01.test.com     rac1-test-scan01
10.100.70.86    rac1-test-scan01.test.com     rac1-test-scan01
10.100.70.87    rac1-test-scan01.test.com     rac1-test-scan01
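-A quick way to confirm the DNS/SCAN entries and host resolution before moving on (hostnames as in the example above; nslookup assumes the bind-utils package is installed):
# nslookup rac1-test-scan01.test.com
# ping -c 2 testrac1n02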
-To install the Oracle-required RPMs in one go (this only works when the internet is enabled on the server):
# yum install oracle-validated
- It is probably worth doing a full update as well, but this is not strictly speaking necessary.
# yum update
-Manual setup:
-Install the following packages if they are not already present.
yum install binutils -y
yum install compat-libcap1 -y
yum install compat-libstdc++-33 -y
yum install compat-libstdc++-33.i686 -y
yum install gcc -y
yum install gcc-c++ -y
yum install glibc -y
yum install glibc.i686 -y
yum install glibc-devel -y
yum install glibc-devel.i686 -y
yum install ksh -y
yum install libgcc -y
yum install libgcc.i686 -y
yum install libstdc++ -y
yum install libstdc++.i686 -y
yum install libstdc++-devel -y
yum install libstdc++-devel.i686 -y
yum install libaio -y
yum install libaio.i686 -y
yum install libaio-devel -y
yum install libaio-devel.i686 -y
yum install libXext -y
yum install libXext.i686 -y
yum install libXtst -y
yum install libXtst.i686 -y
yum install libX11 -y
yum install libX11.i686 -y
yum install libXau -y
yum install libXau.i686 -y
yum install libxcb -y
yum install libxcb.i686 -y
yum install libXi -y
yum install libXi.i686 -y
yum install make -y
yum install sysstat -y
yum install unixODBC -y
yum install unixODBC-devel -y
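-The same package set can also be installed in a single invocation, equivalent to the list above:
yum install -y binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ \
  glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 \
  libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 \
  libaio libaio.i686 libaio-devel libaio-devel.i686 \
  libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 \
  libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 \
  make sysstat unixODBC unixODBC-devel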


-To create the directories, enter the following commands as the root user:
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory
mkdir -p /u01/12.1.0/grid
chown -R grid:oinstall /u01/12.1.0/grid
chmod -R 775 /u01/12.1.0/grid
mkdir -p /u01/app/oracle
mkdir /u01/app/oracle/cfgtoollog
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chmod -R 775 /u01/app/oracle/product/11.2.0/db_1

--Create the mandatory groups and user accounts.
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 1006 asmadmin
groupadd -g 1004 asmdba
groupadd -g 1005 asmoper
useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
useradd -u 1001 -g oinstall -G dba,asmdba oracle
--Set the passwords for the oracle and grid accounts using the following commands. Use your own passwords.
# passwd grid
# passwd oracle
-Modify /etc/sysctl.conf and /etc/security/limits.conf as follows.

--Add or amend the following lines in the "/etc/sysctl.conf" file.

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
--Run the following command to change the current kernel parameters.
/sbin/sysctl -p
--Add the following lines to the "/etc/security/limits.conf" file.
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
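--The limits above cover only the oracle user; on a RAC node the grid user normally needs the same settings (an assumption here, since the original lists only oracle):
grid   soft   nofile    1024
grid   hard   nofile    65536
grid   soft   nproc    16384
grid   hard   nproc    16384
grid   soft   stack    10240
grid   hard   stack    32768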
--Add the following line to the "/etc/pam.d/login" file, if it is not already present.
session required pam_limits.so
--Disable Secure Linux (SELinux) by editing the "/etc/selinux/config" file,
--making sure the SELINUX flag is set as follows.
   SELINUX=disabled
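--After a reboot, the SELinux state can be confirmed with the standard tool:
# getenforce
Disabled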
-- The above steps need to be performed on each server.
-- Now create the ASM disks.
-- The raw volume names below were provided by the OS team; decide which volume should be marked as which ASM disk.
-- The steps below need to be performed on each cluster.
-- If the storage team has provided ASM-marked disks, they will already be visible as candidate disks and the fdisk partitioning below is not needed.
-- The fdisk partitioning below was done on the standby cluster machines; the same needs to be done on the primary cluster as well.
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdb-------data
Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdc-------data 
Disk /dev/sdc: 1099.5 GB, 1099511627776 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sde-------data 
Disk /dev/sde: 1099.5 GB, 1099511627776 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdh-----------------------reco 
Disk /dev/sdh: 107.4 GB, 107374182400 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdd-------data 
Disk /dev/sdd: 1099.5 GB, 1099511627776 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdg-----------------------reco 
Disk /dev/sdg: 107.4 GB, 107374182400 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdk--------------------redo 
Disk /dev/sdk: 12.9 GB, 12884901888 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdi-----------------------reco 
Disk /dev/sdi: 107.4 GB, 107374182400 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdj--------------------redo 
Disk /dev/sdj: 12.9 GB, 12884901888 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdf--------data 
Disk /dev/sdf: 1099.5 GB, 1099511627776 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdm--------------------ocrvote 
Disk /dev/sdm: 2147 MB, 2147483648 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdn--------------------ocrvote 
Disk /dev/sdn: 2147 MB, 2147483648 bytes 
[root@TESTRAC2N01 ~]# fdisk -l | grep -i /dev/sdl---------------------ocrvote 
Disk /dev/sdl: 2147 MB, 2147483648 bytes 
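-- Instead of grepping device by device, the same inventory can be pulled in one pass (a sketch; device names as above):
[root@TESTRAC2N01 ~]# fdisk -l 2>/dev/null | grep "^Disk /dev/sd"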
-- Now create the partitions.
-- The partitioning below was done on the standby cluster; the same needs to be done on the primary cluster as well, per volume name.
-- These fdisk steps are not required if the OS/storage team provided volumes that are already available as candidate disks.
[root@TESTRAC2N01 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x1e09a05a.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-133674, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-133674, default 133674):
Using default value 133674
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

-- Partition all the volumes in the same way as above; only /dev/sdb has been done here (see the sketch below for the remaining volumes).
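-- A minimal sketch that feeds the same answers (n, p, 1, default, default, w) to fdisk in a loop for the remaining volumes. The device list is taken from the fdisk output above; this is destructive, so double-check the names before running it.
for d in sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d    # one primary partition spanning the whole disk
done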
-- Configure oracleasm with the commands below, then add the disks into ASM.
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver.
The following questions will determine whether the driver is loaded on boot
and what permissions it will have. The current values will be shown in
brackets ('[]'). Hitting <ENTER> without typing an answer will keep that
current value. Ctrl-C will abort.

Default user to own the driver interface [grid]:
Default group to own the driver interface [oinstall]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@TESTRAC1N01 init.d]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@TESTRAC2N01 ~]# cd /etc/init.d
# /etc/init.d/oracleasm createdisk DATA0 /dev/sdb1
# /etc/init.d/oracleasm createdisk DATA1 /dev/sdc1
# /etc/init.d/oracleasm createdisk DATA2 /dev/sdd1
# /etc/init.d/oracleasm createdisk DATA3 /dev/sde1
# /etc/init.d/oracleasm createdisk DATA4 /dev/sdf1
# /etc/init.d/oracleasm createdisk RECO1 /dev/sdh1
# /etc/init.d/oracleasm createdisk RECO2 /dev/sdg1
# /etc/init.d/oracleasm createdisk RECO3 /dev/sdi1
# /etc/init.d/oracleasm createdisk REDO1 /dev/sdj1
# /etc/init.d/oracleasm createdisk REDO2 /dev/sdk1
# /etc/init.d/oracleasm createdisk OCRVOTE1 /dev/sdm1
# /etc/init.d/oracleasm createdisk OCRVOTE2 /dev/sdn1
# /etc/init.d/oracleasm createdisk OCRVOTE3 /dev/sdl1

### Do this on both nodes, then scan and list.
[root@TESTRAC2N01 init.d]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@TESTRAC2N01 init.d]# /etc/init.d/oracleasm listdisks
DATA0
DATA1
DATA2
DATA3
DATA4
OCRVOTE1
OCRVOTE2
OCRVOTE3
RECO1
RECO2
RECO3
REDO1
REDO2
[root@TESTRAC2N01 disks]# ls -lart
total 0
drwxr-xr-x 4 root root          0 Jan 13 09:52 ..
drwxr-xr-x 1 root root          0 Jan 13 09:52 .
brw-rw---- 1 grid asmadmin 8,  33 Jan 13 09:52 DATA1
brw-rw---- 1 grid asmadmin 8,  17 Jan 13 09:52 DATA0
brw-rw---- 1 grid asmadmin 8,  49 Jan 13 09:52 DATA2
brw-rw---- 1 grid asmadmin 8, 113 Jan 13 09:52 RECO1
brw-rw---- 1 grid asmadmin 8, 145 Jan 13 09:52 REDO1
brw-rw---- 1 grid asmadmin 8,  81 Jan 13 09:52 DATA4
brw-rw---- 1 grid asmadmin 8, 177 Jan 13 09:52 OCRVOTE3
brw-rw---- 1 grid asmadmin 8, 209 Jan 13 09:52 OCRVOTE2
brw-rw---- 1 grid asmadmin 8, 129 Jan 13 09:52 RECO3
brw-rw---- 1 grid asmadmin 8,  65 Jan 13 09:52 DATA3
brw-rw---- 1 grid asmadmin 8,  97 Jan 13 09:52 RECO2
brw-rw---- 1 grid asmadmin 8, 161 Jan 13 09:52 REDO2
brw-rw---- 1 grid asmadmin 8, 193 Jan 13 09:52 OCRVOTE1
[root@TESTRAC2N01 disks]# pwd
/dev/oracleasm/disks
-One additional check can be performed before proceeding with the grid installation.
[root@TESTRAC2N01 disks]# /etc/init.d/oracleasm querydisk -v -p DATA1
Disk "DATA1" is a valid ASM disk
/dev/sdc1: LABEL="DATA1" TYPE="oracleasm"
-Configure SSH user equivalence on each node in the cluster. Log in as the "oracle" user and perform the following tasks on each node (the same needs to be done for the "grid" user).
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa # Accept the default settings.
-The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file.
-Log in as the "oracle" user on testrac1n01, generate an "authorized_keys" file, and copy it to testrac1n02 using the following commands:

su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys testrac1n02:/home/oracle/.ssh/
-Next, log in as the "oracle" user on testrac1n02 and perform the following commands:
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys testrac1n01:/home/oracle/.ssh/

-The "authorized_keys" file on both servers now contains the public keys generated on all RAC nodes.
-To enable SSH user equivalency on the cluster member nodes issue the following commands on each node.
ssh testrac1n01 date
ssh testrac1n02 date
ssh testrac1n01.test.com date
ssh testrac1n02.test.com date
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
-You should now be able to SSH and SCP between the servers without entering passwords.
--Before installing the clusterware, check that the prerequisites have been met using the "runcluvfy.sh" utility in the clusterware root directory.
--Optional steps (NTP and iptables can be stopped if they are not configured in your environment):


# service ntpd stop 
# chkconfig ntpd off 
# mv /etc/ntp.conf /etc/ntp.conf.orig 
# rm /var/run/ntpd.pid 
 --Then restart NTP. 
# service ntpd restart 
# service iptables stop 
# su - oracle
$ cd /u01/stage/12cGrid/grid
$ ./runcluvfy.sh stage -pre crsinst -n TESTRAC1N01,TESTRAC1N02 -fixup -verbose > /u01/stage/clvoutput.txt
-Check the output; it is recommended to proceed with the Grid installation only after all checks have succeeded.
-Before starting the GRID installation, make sure DISPLAY is configured so the installation can proceed with the GUI.
-DISPLAY configuration steps are not covered here; see the sketch after the runInstaller command below.
-The grid software is unpacked in the directory below.
$ cd /u02/stage/12cGrid/grid
$ ./runInstaller
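-A minimal sketch for the DISPLAY setup (assuming X forwarding with ssh -X, or an X server such as VcXsrv/Xming running on your workstation; adjust to your environment):
$ ssh -X oracle@testrac1n01          # X forwarding from your workstation
$ # or, with an X server listening on your workstation:
$ export DISPLAY=<workstation_ip>:0.0
$ xclock                             # quick test that a GUI window opens (if xclock is installed)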





- Next

-Next

-Before clicking Next, run the "SSH connectivity" check below and make sure the SSH connection test is successful.

- Next

- Next

- Next. On the ASM screen only the OCR_VOTE disk group is mandatory here; the other ASM disk groups can be created later using asmca.

- The warnings shown below for swap space and NTP must be fixed with the help of the OS team; it is recommended to fix all warnings using the runcluvfy.sh output mentioned in the earlier steps.




   - Next

-- Provide the root credentials so that all root.sh scripts run automatically during the installation.





- Next



- The grid installation is now complete; next, proceed with the database.

- Unpack the database software (see the sketch below).
- This example installs an 11g DB on 12c grid, but you can also proceed with a 12c DB installation; make sure the grid and DB versions are the same, or that the grid version is higher than the DB version.
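- Unpacking usually amounts to extracting both zip files into the same staging directory (a sketch; the staging path is an assumption here, and the zip names are the ones shown in the listing below):
$ cd /u01/stage/11gsoft                          # assumed staging path; use your own
$ unzip p10404530_112030_Linux-x86-64_1of7.zip
$ unzip p10404530_112030_Linux-x86-64_2of7.zip
# both archives extract into a single "database" directory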

-- Start the DB software installation.
su - oracle
[root@TESTRAC2N01 11gsoft]# ls -lart
total 2442064
drwxrwxrwx  8 grid oinstall       4096 Sep 22  2011 database
-rwxrwxrwx  1 grid oinstall 1358454646 Jan 12 08:48 p10404530_112030_Linux-x86-64_1of7.zip
-rwxrwxrwx  1 grid oinstall 1142195302 Jan 12 08:48 p10404530_112030_Linux-x86-64_2of7.zip
drwxrwxrwx  3 grid oinstall       4096 Jan 12 09:19 .
drwxrwxrwx. 6 grid oinstall       4096 Jan 13 04:56 ..
[root@TESTRAC2N01 11gsoft]# cd database

./runInstaller




- Next

-Next

- Before proceeding to the next step, run the "SSH connectivity" check and make sure it succeeds.
- Next  


- Next

- Next
 - Next





-Next
-Install


- Run root.sh then click OK


- The database software installation is now also completed.
- The same grid and database installation steps need to be repeated on the standby cluster servers.
- Now you can load data into the primary DB through a backup restore or impdp.
- Once the primary DB is completely ready, create the standby DB through an RMAN duplicate, as below.

Source Instance Name: testprd1 (Primary cluster  node1)
Destination Instance Name: testprds1 ( standby cluster  node1)

-- Modify the listener on Source:
vi /u01/12.1.0/grid/network/admin/listener.ora

LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
      (ADDRESS = (PROTOCOL = TCP)(HOST = testrac1n01-vip1)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC = (GLOBAL_DBNAME = testprd1) (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1) (SID_NAME = testprd1))
  )


--Restart the listener:

$ lsnrctl stop listener; lsnrctl start listener
-- Add a TNS entry to tnsnames.ora to refer to the destination: [on the Source server]

TESTPRDS =
  (DESCRIPTION =
    #(ADDRESS = (PROTOCOL = TCP)(HOST = rtest2-scan01)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST=10.100.70.82)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testprds1)
(UR = A)
    )
  )
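-- Before running the duplicate, it is worth confirming that the new entry resolves and the remote listener answers; note that tnsping only proves name resolution and listener reachability, not credentials.
$ tnsping TESTPRDS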

-- Modify the listener: [on Destination]
vi /u02/12.1.0/grid/network/admin/listener.ora

LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS=  (PROTOCOL = tcp)(HOST= rtest2-scan01)(PORT=1521))
      (ADDRESS=(PROTOCOL=ipc)(KEY=extproc))))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = testprds1)
      (ORACLE_HOME = /u02/app/oracle/product/11.2.0/dbhome_1)
    )
  )

-- Add TNSEntry in tnsnames.ora to refer to Source DB: [On Destination]

$ vi $ORACLE_HOME/network/admin/tnsnames.ora

TESTPRD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rtest1-scan01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testprd)
    )
  )

--Restart the listener:

$ lsnrctl stop listener; lsnrctl start listener

-- Copy the Password file from source to destination:
scp orapwtestprd1 oracle@testrac2n01:/u02/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtestprds1 
scp orapwtestprd1 oracle@testrac2n02:/u02/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtestprds2 


-- Create standby redo logs on the primary: [add 8 standby logs to go with the 8 online redo logs, 16 logs in total]

SQL> ALTER DATABASE ADD STANDBY LOGFILE '+DATA' SIZE 314572800;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+DATA' SIZE 314572800;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+DATA' SIZE 314572800;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+DATA' SIZE 314572800;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+RECO' SIZE 314572800;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+RECO' SIZE 314572800;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+RECO' SIZE 314572800;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+RECO' SIZE 314572800;
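-- The standby redo logs just added can be verified from SQL*Plus (a quick check against the v$standby_log view; group numbers and sizes will reflect your own settings):
SQL> SELECT group#, thread#, bytes/1024/1024 AS size_mb, status FROM v$standby_log;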

-On the Primary & Destination servers:
-In order to tackle the "Connection Lost" error when trying to connect remotely to one of the databases from the other:

su - root
# chmod 7771 $ORACLE_HOME/bin/oracle
- Create pfile:
$  cd $ORACLE_HOME/dbs
$ vi inittestprds1.ora

testprds1.__db_cache_size=7214202880
testprds1.__java_pool_size=33554432
testprds1.__large_pool_size=33554432
testprds1.__pga_aggregate_target=2952790016
testprds1.__sga_target=8791270184
testprds1.__shared_io_pool_size=0
testprds1.__shared_pool_size=1442840576
testprds1.__streams_pool_size=0
testprds1.thread=1
testprds1.undo_tablespace='UNDOTBS1'
*.cluster_database=false
*.compatible='11.2.0.0.0'
*.control_files='+DATA/testprds/controlfile/current.266.916823241','+RECO/testprds/controlfile/current.258.916823241'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='testprds'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=536870912000
*.diagnostic_dest='/u02/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=testprdsXDB)'
*.open_cursors=300
*.processes=150
*.remote_listener='rtest2-scan01:1521'
*.remote_login_passwordfile='exclusive'
*.instance_number=1

$ sqlplus / as sysdba

SQL> startup nomount pfile='inittestprds1.ora'

--On Primary Nodes: [Duplicate command will run from source]

$vi /home/oracle/duplicate.rman

run {
allocate channel disk1 type disk;
allocate channel disk2 type disk;
allocate channel disk3 type disk;
allocate channel disk4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'testprd','testprds'
set db_unique_name='testprds'
set instance_number='1'
set audit_file_dest='/u02/app/oracle/admin/testprds/adump'
set diagnostic_dest='/u02/app/oracle'
set db_file_name_convert='+DATA/testprd/datafile','+DATA/testprds/datafile','+DATA/testprd/tempfile','+DATA/testprds/tempfile'
set log_file_name_convert='+DATA/testprd/onlinelog','+DATA/testprds/onlinelog','+RECO/testprd/onlinelog','+RECO/testprds/onlinelog'
set control_files='+DATA'
set log_archive_max_processes='5'
set fal_client='testprds'
set fal_server='testprd'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(testprd,testprds)'
set log_archive_dest_2='service=testprd ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE)'
;
}
--replace XXXX with sys password.
$ nohup $ORACLE_HOME/bin/rman target / auxiliary sys/XXXX@testprds cmdfile=/home/oracle/duplicate.rman | tee duplicate.log 2>&1 &
-Once the duplicate job finishes:
-Disable log_archive_dest_state_2: [on Destination]

SQL>alter system set log_archive_dest_state_2=defer;
-Restart log_archive_dest_state_2: [on Source]

SQL>alter system set log_archive_dest_state_2=defer;
SQL>alter system set log_archive_dest_state_2=enable;

-Start the Managed recovery: [on Destination]

SQL>recover managed standby database using current logfile nodelay disconnect;
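-Apply progress can then be monitored from the standby with a couple of standard queries (the sequence numbers will differ in your environment):

SQL> SELECT process, status, thread#, sequence# FROM v$managed_standby;
SQL> SELECT thread#, MAX(sequence#) FROM v$archived_log WHERE applied='YES' GROUP BY thread#;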

-Prepare to start Node2 instance on Destination:
-Create SPFILE on +DATA instead of FS: [On Destination]

SQL> alter system set instance_number=2 scope=spfile sid='testprds2';
SQL>alter system set cluster_database=true scope=spfile;

SQL> create pfile='/u02/app/oracle/product/11.2.0/dbhome_1/dbs/inittestprds1.ora' from spfile;
SQL> create spfile='+data/TESTPRDS/PARAMETERFILE/spfiletestprds.ora' from pfile='/u02/app/oracle/product/11.2.0/dbhome_1/dbs/inittestprds1.ora';

$ cd /u02/app/oracle/product/11.2.0/dbhome_1/dbs
$ rm spfiletestprds1.ora
$ vi inittestprds1.ora
SPFILE='+data/TESTPRDS/PARAMETERFILE/spfiletestprds.ora'

[On Destination Node2]:
$ mkdir -p /u02/app/oracle/admin/testprds/adump
$ cd /u02/app/oracle/product/11.2.0/dbhome_1/dbs
$ rm spfiletestprds2.ora
$ vi inittestprds2.ora
SPFILE='+data/TESTPRDS/PARAMETERFILE/spfiletestprds.ora'

-Start Instance2: [On Destination, standby cluster node2]

SQL> startup mount

--Add the database to clusterware resource: [On Destination]

$srvctl add database -d testprds -o /u02/app/oracle/product/11.2.0/dbhome_1
$srvctl add instance -d testprds -i testprds1 -n TESTRAC2N01
$srvctl add instance -d testprds -i testprds2 -n TESTRAC2N02
$srvctl add listener -o /u02/12.1.0/grid -l listener -s
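--Once registered, the new clusterware resource can be verified with the standard srvctl commands (output will vary). Optionally, the database role and start mode can also be set while adding the database (srvctl add database ... -r PHYSICAL_STANDBY -s mount), which is a common choice for a standby.
$ srvctl config database -d testprds
$ srvctl status database -d testprds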


--Stop/start DB resource: [Not mandatory]

srvctl stop  database -d testprds
srvctl start database -d testprds


#######################################################################
<<<< Two node primary and standby RAC installation and setup completed >>>>
#######################################################################


