Setting up Oracle 11g RAC on VirtualBox
Published: 2019-06-12


Installation Environment

Host: Windows 10

Virtual machines (VirtualBox): two Oracle Linux R6 U7 x86_64 guests
Oracle Database software: Oracle 11gR2
Cluster software: Oracle Grid Infrastructure 11gR2

During this installation the previously downloaded media turned out to have problems and cost a lot of time; in the end I re-downloaded all seven archives, of which only the first three were needed. The seven archives contain:

 

p102025301120——Linux-x86-64_1of7.zip             database installation media
p102025301120——Linux-x86-64_2of7.zip             database installation media
p102025301120——Linux-x86-64_3of7.zip             grid installation media
p102025301120——Linux-x86-64_4of7.zip             client installation media
p102025301120——Linux-x86-64_5of7.zip             gateways installation media
p102025301120——Linux-x86-64_6of7.zip             examples
p102025301120——Linux-x86-64_7of7.zip             deinstall

 

 

Pay close attention to the swap size: 1.5x physical memory is recommended.
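This requirement can be checked with a few lines of shell. The sketch below uses the example values from the meminfo output later in this guide; on a real node the two values would be read from /proc/meminfo as shown in the comments.

```shell
# Sketch: verify swap >= 1.5x physical memory.
# On a real node, read the live values instead:
#   MEM_KB=$(awk '/MemTotal/  {print $2}' /proc/meminfo)
#   SWAP_KB=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
MEM_KB=2552560     # example value from this guide's VM
SWAP_KB=2621436    # example value from this guide's VM
MIN_SWAP_KB=$((MEM_KB * 3 / 2))   # 1.5x in integer arithmetic
if [ "$SWAP_KB" -ge "$MIN_SWAP_KB" ]; then
  echo "swap OK"
else
  echo "swap too small: want at least ${MIN_SWAP_KB} kB"
fi
```

With these example values the check reports the swap as too small (the VM has roughly 1x rather than 1.5x); the swap-file procedure later in this guide shows how to grow it.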

Shared storage: ASM

[root@rac1 ~]# lsb_release -a
LSB Version:    :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: OracleServer
Description:    Oracle Linux Server release 6.5
Release:        6.5
Codename:       n/a
[root@rac1 ~]# uname -r
3.8.13-16.2.1.el6uek.x86_64

Hardware requirements:

- Each server node needs at least two NICs: one public network interface and one private (heartbeat) interface.
- If you install the Oracle cluster software through the OUI, the interface name used for the public or private network must be the same on every node. For example, if node1 uses eth0 as its public interface, node2 cannot use eth1 as its public interface.

IP requirements:

DHCP is not used here; a static SCAN IP is specified (the SCAN IP enables cluster load balancing: the cluster software assigns it to a node as needed).
Each node gets a public IP, a virtual IP, and a private IP.
The public IP, VIP, and SCAN IP must be in the same subnet.
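A planning mistake here is cheap to catch early. A minimal sketch, assuming the /24 (255.255.255.0) mask used throughout this guide:

```shell
# The public IP, VIP, and SCAN IP must share the same /24 network.
subnet() { echo "${1%.*}"; }      # strip the host octet (assumes /24)
PUB=192.168.177.101
VIP=192.168.177.201
SCAN=192.168.177.110
if [ "$(subnet $PUB)" = "$(subnet $VIP)" ] && [ "$(subnet $VIP)" = "$(subnet $SCAN)" ]; then
  echo "same subnet: $(subnet $PUB).0/24"
else
  echo "subnet mismatch"
fi
```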

Example of manual IP configuration without GNS:

Identity      Home Node  Host Node                       Given Name  Type     Address
RAC1 Public   RAC1       RAC1                            rac1        Public   192.168.177.101
RAC1 VIP      RAC1       RAC1                            rac1-vip    Public   192.168.177.201
RAC1 Private  RAC1       RAC1                            rac1-priv   Private  192.168.139.101
RAC2 Public   RAC2       RAC2                            rac2        Public   192.168.177.102
RAC2 VIP      RAC2       RAC2                            rac2-vip    Public   192.168.177.202
RAC2 Private  RAC2       RAC2                            rac2-priv   Private  192.168.139.102
SCAN IP       none       Selected by Oracle Clusterware  scan-ip     Virtual  192.168.177.110

  

II. Creating the Operating Systems

1. Oracle Linux 6.x

Disk partition: 20 GB. Swap must be no smaller than physical memory, ideally at least 1.5x.
When selecting packages, in addition to the Base group, select the following:

  • Compatibility libraries
  • FTP server
  • GNOME desktop
  • X Window System
  • Development tools
  • Chinese support

Alternatively, the Oracle Linux one-step preinstall package can install the required packages and tune the Linux environment automatically to satisfy Oracle's installation prerequisites.

2. Disable the firewall and SELinux (on both nodes, node1 and node2); otherwise the grid installation may hang at 65%.

[root@rac1 ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@rac1 ~]# chkconfig iptables off
[root@rac1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@rac1 ~]# setenforce 0

 

3. Configure the installation DVD as a local YUM repository:

mv /etc/yum.repos.d/CentOS-Base.repo CentOS-Base.repo.bak
vim /etc/yum.repos.d/CentOS_Media.repo
[c6-media]
name=CentOS-$releasever - Media
baseurl=file:///media/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
yum clean all
yum makecache

On Oracle Linux:

[root@vmac6 ~]# cd /etc/yum.repos.d
[root@vmac6 yum.repos.d]# mv public-yum-ol6.repo public-yum-ol6.repo.bak
[root@vmac6 yum.repos.d]# touch public-yum-ol6.repo
[root@vmac6 yum.repos.d]# vim public-yum-ol6.repo
[oel6]
name = Enterprise Linux 6.3 DVD
baseurl=file:///media/"OL6.3 x86_64 Disc 1 20120626"/Server
gpgcheck=0
enabled=1

4. Details:

When installing Oracle Linux, configure two NICs: one in Host-Only mode for communication between the two VM nodes, and one in NAT mode for external access; static IPs are assigned manually later. Plan at least 2.5 GB of memory and 2.5 GB of swap per host. Disk layout: 500 MB for /boot, with the remaining space managed by LVM; within LVM, 2.5 GB for swap and the rest for /.

The two Oracle Linux hosts are named rac1 and rac2.
Note: ideally place the two guest OSes on different physical disks, otherwise I/O will be strained.

Check the memory and swap sizes:

[root@rac1 ~]# grep MemTotal /proc/meminfo
MemTotal:        2552560 kB
[root@rac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal:       2621436 kB

If swap is too small:

To extend swap with a swap file, first compute the number of blocks from the desired size in MB: blocks = size_in_MB * 1024. For example, to add a 64 MB swap file: blocks = 64 * 1024 = 65536.
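The same calculation in one line of shell, using the 64 MB example:

```shell
# blocks = desired size in MB * 1024 (dd writes 1 KiB blocks below)
SWAP_MB=64
COUNT=$((SWAP_MB * 1024))
echo "$COUNT"   # 65536
```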

Then:

# dd if=/dev/zero of=/swapfile bs=1024 count=65536
# mkswap /swapfile
# swapon /swapfile
# vi /etc/fstab        # add: /swapfile swap swap defaults 0 0
# cat /proc/swaps      # or: free -m   (check the swap size)
# swapoff /swapfile    # to disable the extra swap

5. Configure the network

(1) Configure the IP addresses

// The gateway here is determined by the VMware network settings; eth0 is the public interface, eth1 the private heartbeat interface
// On host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0 
IPADDR=192.168.177.101 
PREFIX=24 
GATEWAY=192.168.177.1 
DNS1=192.168.177.1

[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1 

IPADDR=192.168.139.101 
PREFIX=24

// On host rac2:

[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0 
IPADDR=192.168.177.102 
PREFIX=24 
GATEWAY=192.168.177.1 
DNS1=192.168.177.1

[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1 

IPADDR=192.168.139.102 
PREFIX=24

(2) Configure the hostname

// On host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network 
NETWORKING=yes 
HOSTNAME=rac1 
GATEWAY=192.168.177.1 
NOZEROCONF=yes

// On host rac2:

[root@rac2 ~]# vi /etc/sysconfig/network 
NETWORKING=yes 
HOSTNAME=rac2 
GATEWAY=192.168.177.1 
NOZEROCONF=yes

(3) Configure /etc/hosts

Add on both rac1 and rac2:
[root@rac1 ~]# vi /etc/hosts 
192.168.177.101 rac1 
192.168.177.201 rac1-vip 
192.168.139.101 rac1-priv

192.168.177.102 rac2 

192.168.177.202 rac2-vip 
192.168.139.102 rac2-priv

192.168.177.110 scan-ip

6. Add users, groups, and installation directories

/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1022 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
[root@rac1 ~]# passwd grid
[root@rac1 ~]# passwd oracle
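To inspect what the chmod above produces without touching the real /u01, the same layout can be rebuilt under a scratch directory (hypothetical paths; the chown lines are shown only as comments because they require root):

```shell
# Rebuild the /u01 layout under a temporary root and check the modes.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/u01/app/11.2.0/grid" "$ROOT/u01/app/grid" "$ROOT/u01/app/oracle"
# chown -R grid:oinstall "$ROOT/u01"            (root only)
# chown oracle:oinstall "$ROOT/u01/app/oracle"  (root only)
chmod -R 775 "$ROOT/u01"
stat -c '%a' "$ROOT/u01/app/oracle"   # 775
```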

7. Adjust kernel parameters and resource limits

[root@rac1 ~]# vi /etc/sysctl.conf            # already adjusted by the preinstall package
[root@rac1 ~]# vim /etc/security/limits.conf  # entries for grid must be added
# grid-rdbms-server-11gR2-preinstall setting for nofile soft limit is 1024
grid   soft   nofile    1024
# grid-rdbms-server-11gR2-preinstall setting for nofile hard limit is 65536
grid   hard   nofile    65536
# grid-rdbms-server-11gR2-preinstall setting for nproc soft limit is 2047
grid   soft   nproc    2047
# grid-rdbms-server-11gR2-preinstall setting for nproc hard limit is 16384
grid   hard   nproc    16384
# grid-rdbms-server-11gR2-preinstall setting for stack soft limit is 10240KB
grid   soft   stack    10240
# grid-rdbms-server-11gR2-preinstall setting for stack hard limit is 32768KB
grid   hard   stack    32768

Configure PAM login:

[root@rac1 ~]# vi /etc/pam.d/login
session required pam_limits.so

8. Set the user environment variables

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1   # RAC1 (use +ASM2 on RAC2)
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1   # RAC1 (use orcl2 on RAC2)
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

$ source .bash_profile   # apply the settings

9. Clone the second node and set up shared storage

Fix the network configuration on the cloned machine:
cd /etc/udev/rules.d
vim 70-persistent-net.rules
Edit this file: copy the MAC addresses of eth2 and eth3 into the eth0 and eth1 entries and delete the original eth2 and eth3 entries. Make sure eth0 gets the public MAC and eth1 the private MAC.
start_udev
or reboot the machine.
Adjust the IP configuration of eth0 and eth1 as shown earlier.
vim /etc/sysconfig/network   # change the hostname
vim /etc/hosts

Manually add several shared disks in VirtualBox and attach each disk to node 2 as well.

Create the storage.

 

Then bind these disks with udev; the commands are in the script below.

cd /dev
ls -l sd*

Run the following script on both nodes to bind the shared disks:

for i in b c d e f g ; do
  echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
/sbin/start_udev

Verify:

[root@rac1 dev]# ls -l asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 27 12:00 asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 27 12:00 asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 27 12:00 asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 27 12:00 asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 27 12:00 asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Apr 27 12:00 asm-diskg

 

For reference, creating shared storage under VMware:

In the VMware installation directory, from a cmd prompt:

C:\Program Files (x86)\VMware\VMware Workstation>
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr2.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\votingdisk.vmdk
vmware-vdiskmanager.exe -c -s 20000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\data.vmdk
vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\backup.vmdk

Example:

C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 d:\vpc\rac\share\ocr.vmdk
Creating disk 'd:\vpc\rac\share\ocr.vmdk'
  Create: 100% done.
Virtual disk creation successful.
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 d:\vpc\rac\share\data.vmdk
Creating disk 'd:\vpc\rac\share\data.vmdk'
  Create: 100% done.
Virtual disk creation successful.
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 d:\vpc\rac\share\fra.vmdk
Creating disk 'd:\vpc\rac\share\fra.vmdk'
  Create: 100% done.
Virtual disk creation successful.

Note: -a specifies the disk adapter type; -t 2 creates a preallocated single-file disk.

This creates two 1 GB OCR disks, one 1 GB voting disk, one 20 GB data disk, and one 10 GB backup disk.

10. Configure SSH equivalence for the grid and oracle users

This is a critical step. Although the official documentation states that the OUI can configure SSH automatically while installing GI and RAC, configuring it manually is preferable so the CVU checks can be run before installation.

Generate keys on each node, for both users:

[root@node1 ~]# su - grid
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
[root@node1 ~]# su - oracle
(repeat the same four commands)
[root@node2 ~]# su - grid
(repeat the same four commands)
[root@node2 ~]# su - oracle
(repeat the same four commands)

Aggregate the keys on node 1:

[root@node1 ~]# su - grid
touch ~/.ssh/authorized_keys
cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys
[root@node1 ~]# su - oracle
(repeat the same six commands)

Copy the file holding the public keys from node1 to node2:

[root@node1 ~]# su - grid
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
[root@node1 ~]# su - oracle
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

Verify the configuration as both grid and oracle, on both node1 and node2:

# set the permissions of the key file
chmod 600 ~/.ssh/authorized_keys
# enable user equivalence
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
# verify that SSH works without a password
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

If the date is printed without a password prompt, SSH equivalence is configured correctly. All of the above must be run on both nodes, and each ssh command requires typing "yes" the first time it runs. If this is skipped, the clusterware installation will fail with "The specified nodes are not clusterable" even though the keys are in place, because passwordless access only becomes seamless after the first-connection confirmation. Remember: the goal of SSH equivalence is that every node can SSH to every other node without a password.

Environment Configuration

Unless noted otherwise, perform the following on every node; all passwords are set to oracle.

1. Connect via SecureCRT

    • Backspace produces ^H garbage in sqlplus:

      Options -> Session Options -> Terminal -> Emulation -> Mapped Keys -> Other mappings
      Check "Backspace sends delete"

    • Delete and Home do not work in vi:

      Options -> Session Options -> Terminal -> Emulation
      Set Terminal to Linux
      Check "Select an alternate keyboard emulation" and choose Linux

  

 

3. Disable NTP (configure on both nodes, node1 and node2)

Oracle recommends using the Oracle Cluster Time Synchronization Service instead, so stop and remove NTP:

[root@node1 ~]# service ntpd stop

[root@node1 ~]# chkconfig ntpd off

[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.old

[root@node1 ~]# rm -rf /var/run/ntpd.pid

 

4. Check the TCP/UDP port range

# cat /proc/sys/net/ipv4/ip_local_port_range

If it already shows 9000 65500, the following steps are unnecessary.

# echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range

# vim /etc/sysctl.conf

# Add this line:

# TCP/UDP port range

net.ipv4.ip_local_port_range = 9000 65500

# Restart the network

# /etc/rc.d/init.d/network restart
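After sysctl -p or the restart above, the active range should match the line added to /etc/sysctl.conf. A sketch of the check; LINE here is the expected sysctl output rather than a live read:

```shell
# In practice: LINE=$(sysctl net.ipv4.ip_local_port_range)
LINE="net.ipv4.ip_local_port_range = 9000 65500"
LOW=$(echo "$LINE" | awk '{print $3}')
HIGH=$(echo "$LINE" | awk '{print $4}')
echo "ephemeral port range: $LOW-$HIGH"   # ephemeral port range: 9000-65500
```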

Synchronize the time (on both nodes, node1 and node2):

[root@node1 ~]# date -s 23:29:00

[root@node1 ~]# ssh 192.168.7.12 date;date

[root@node1 ~]# clock -w

date -s 03/07/2017     set the date to March 7, 2017
date -s 23:29:00       set the time to 23:29:00
clock -w               sync the BIOS clock, forcing the system time to be written to it

5. System file settings

(1) Kernel parameters:

[root@rac1 ~]# vi /etc/sysctl.conf 
kernel.msgmnb = 65536 
kernel.msgmax = 65536 
kernel.shmmax = 68719476736 
kernel.shmall = 4294967296 
fs.aio-max-nr = 1048576 
fs.file-max = 6815744 
kernel.shmall = 2097152 
kernel.shmmax = 1306910720 
kernel.shmmni = 4096 
kernel.sem = 250 32000 100 128 
net.ipv4.ip_local_port_range = 9000 65500 
net.core.rmem_default = 262144 
net.core.rmem_max = 4194304 
net.core.wmem_default = 262144 
net.core.wmem_max = 1048586 
net.ipv4.tcp_wmem = 262144 262144 262144 
net.ipv4.tcp_rmem = 4194304 4194304 4194304

This value will be flagged by the prerequisite checks later and needs adjusting then:

kernel.shmmax = 68719476736

Apply the kernel changes:

[root@rac1 ~]# sysctl -p

Alternatively, the preinstall package from the Oracle Linux media can make these adjustments:

[root@rac1 Packages]# pwd
/mnt/cdrom/Packages
[root@rac1 Packages]# ll | grep preinstall
-rw-r--r-- 1 root root 15524 Dec 25 2012 oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64.rpm

(2) Shell limits for the oracle and grid users

[root@rac1 ~]# vi /etc/security/limits.conf 
grid soft nproc 2047 
grid hard nproc 16384 
grid soft nofile 1024 
grid hard nofile 65536 
oracle soft nproc 2047 
oracle hard nproc 16384 
oracle soft nofile 1024 
oracle hard nofile 65536

(3) Configure PAM login

[root@rac1 ~]# vi /etc/pam.d/login 
session required pam_limits.so 

(4) Install the required packages

    binutils-2.20.51.0.2-5.11.el6 (x86_64)

    compat-libcap1-1.10-1 (x86_64) 
    compat-libstdc++-33-3.2.3-69.el6 (x86_64) 
    compat-libstdc++-33-3.2.3-69.el6.i686 
    gcc-4.4.4-13.el6 (x86_64) 
    gcc-c++-4.4.4-13.el6 (x86_64) 
    glibc-2.12-1.7.el6 (i686) 
    glibc-2.12-1.7.el6 (x86_64) 
    glibc-devel-2.12-1.7.el6 (x86_64) 
    glibc-devel-2.12-1.7.el6.i686 
    ksh 
    libgcc-4.4.4-13.el6 (i686) 
    libgcc-4.4.4-13.el6 (x86_64) 
    libstdc++-4.4.4-13.el6 (x86_64) 
    libstdc++-4.4.4-13.el6.i686 
    libstdc++-devel-4.4.4-13.el6 (x86_64) 
    libstdc++-devel-4.4.4-13.el6.i686 
    libaio-0.3.107-10.el6 (x86_64) 
    libaio-0.3.107-10.el6.i686 
    libaio-devel-0.3.107-10.el6 (x86_64) 
    libaio-devel-0.3.107-10.el6.i686 
    make-3.81-19.el6 
    sysstat-9.0.4-11.el6 (x86_64)

    A local YUM repository is used here; configure it first:

    [root@rac1 ~]# mount /dev/cdrom /mnt/cdrom/ 
    [root@rac1 ~]# vi /etc/yum.repos.d/dvd.repo 
    [dvd] 
    name=dvd 
    baseurl=file:///mnt/cdrom 
    gpgcheck=0 
    enabled=1 
    [root@rac1 ~]# yum clean all 
    [root@rac1 ~]# yum makecache 
    [root@rac1 ~]# yum install gcc gcc-c++ glibc* glibc-devel* ksh libgcc* libstdc++* libstdc++-devel* make sysstat

Under VirtualBox, run: yum install oracle-rdbms-server-11gR2-preinstall-1.0-6.el6

See http://www.cnblogs.com/ld1977/articles/6767918.html

6. Configure environment variables for the grid and oracle users

ORACLE_SID must be adjusted per node.

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1   # RAC1 (use +ASM2 on RAC2)
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022

Note that ORACLE_UNQNAME is the database unique name; when a database is created across multiple nodes, one instance is created per node, and ORACLE_SID is the name of the local database instance.

[root@rac1 ~]# su - oracle 

[oracle@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1   # RAC1 (use orcl2 on RAC2)
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

$ source .bash_profile   # apply the settings
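Rather than keeping both ORACLE_SID export lines in the profile and deleting one per node, the instance suffix can be derived from the hostname. A sketch assuming the rac1/rac2 naming used in this guide:

```shell
# Derive the per-node SID suffix from the short hostname.
HOST=rac1                # in practice: HOST=$(hostname -s)
NODE_ID=${HOST#rac}      # strips the "rac" prefix
export ORACLE_SID=orcl${NODE_ID}
echo "$ORACLE_SID"       # orcl1
```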

7. Configure SSH equivalence for the grid and oracle users

This is a critical step. Although the official documentation states that the OUI can configure SSH automatically while installing GI and RAC, configuring it manually is preferable so the CVU checks can be run before installation. The procedure is identical to step 10 of the previous section: generate RSA and DSA keys for both grid and oracle on each node, append all public keys to ~/.ssh/authorized_keys on node1, scp that file to node2, then verify with ssh rac1 date, ssh rac2 date, ssh rac1-priv date, and ssh rac2-priv date as both users on both nodes.

Note: do not set a passphrase when generating the keys, the authorized_keys file must have mode 600, and the two nodes must each SSH to the other once (answering "yes") before the installer runs.

8. Configure the disks

ASM-managed storage needs raw devices; the shared disks were attached to both hosts earlier. There are three ways to bind the disks: (1) with oracleasm; (2) via the /etc/udev/rules.d/60-raw.rules configuration file (character-device udev binding); (3) via a script (block-device udev binding, faster than the character method and the most current approach; recommended).

Before binding, partition the disks:

fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
Finally, save the changes with w.

Repeat for the other disks to get the following partitions:

[root@rac1 ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1  /dev/sdc  /dev/sdc1  /dev/sdd  /dev/sdd1  /dev/sde  /dev/sde1  /dev/sdf  /dev/sdf1
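The interactive dialog can also be scripted by piping the keystrokes into fdisk. This sketch only builds and prints the answer sequence (n, p, 1, two empty lines for the default start and end cylinders, w) rather than running it against a real device:

```shell
# Keystrokes for "new primary partition 1 spanning the whole disk, then write":
ANSWERS='n
p
1


w'
# In practice: for d in b c d e f; do echo "$ANSWERS" | fdisk /dev/sd$d; done
echo "$ANSWERS" | wc -l   # 6 lines of input
```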
  

Adding raw devices (a step that ended up not being used):

[root@rac1 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="/dev/sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="17", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="/dev/sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="33", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="/dev/sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="49", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", KERNEL=="/dev/sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="65", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", KERNEL=="/dev/sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="81", RUN+="/bin/raw /dev/raw/raw5 %M %m"
KERNEL=="raw[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"
[root@rac1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@rac1 ~]# ll /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Apr 13 13:51 raw1
crw-rw---- 1 grid asmadmin 162, 2 Apr 13 13:51 raw2
crw-rw---- 1 grid asmadmin 162, 3 Apr 13 13:51 raw3
crw-rw---- 1 grid asmadmin 162, 4 Apr 13 13:51 raw4
crw-rw---- 1 grid asmadmin 162, 5 Apr 13 13:51 raw5
crw-rw---- 1 root disk     162, 0 Apr 13 13:51 rawctl

Note: the rule entries must not contain stray spaces, or they fail with errors. The resulting raw devices must end up owned by grid:asmadmin.

Method (3) (also not used in the end):

[root@rac1 ~]# for i in b c d e f ; do
  echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
[root@rac1 ~]# start_udev
Starting udev:                                             [  OK  ]

[root@rac1 ~]# ll /dev/*asm* 

brw-rw---- 1 grid asmadmin 8, 16 Apr 27 18:52 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 27 18:52 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 27 18:52 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 27 18:52 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 27 18:52 /dev/asm-diskf

With devices added this way, when you later create the ASM disk groups you must set the Disk Discovery Path to /dev/*asm*.

Installing ASM did not succeed for me following this approach; I used the following method instead:

[root@node1 ~]# cd /tmp/oracle
[root@node1 ~]# rpm -ivh kmod-oracleasm-2.0.6.rh1-3.el6.x86_64.rpm
[root@node1 ~]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
[root@node1 ~]# rpm -Uvh oracleasm-support-2.1.8-1.el6.x86_64.rpm
[root@node1 ~]# rpm -ivh cvuqdisk-1.0.9-1.rpm

Installing kmod-oracleasm-2.0.6.rh1-3.el6.x86_64.rpm failed because of a kernel mismatch. Version 2.0.8 (kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm), found at http://rpm.pbone.net/index.php3/stat/4/idpl/30518374/dir/scientific_linux_6/com/kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm.html (also mirrored at mirror.switch.ch, ftp.rediris.es, ftp.pbone.net, and ftp.icm.edu.pl), installs on Linux 6.7.

I added each disk manually in the VMware virtual machine, on both nodes:

After adding the disks, partition each new disk.

On node1:
[root@node1 ~]# fdisk /dev/sdb
m (help)  p (print)  n (new partition)  p  1  <Enter>  <Enter>  p (print)  w (save)  q (quit)
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
fdisk /dev/sdf

Configure the ASM library (must be done on both nodes, node1 and node2):

[root@node1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@node1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

Create the ASM disks (node1 only):

/usr/sbin/oracleasm createdisk VDK001 /dev/sdb1
/usr/sbin/oracleasm createdisk VDK002 /dev/sdc1
/usr/sbin/oracleasm createdisk VDK003 /dev/sdd1
/usr/sbin/oracleasm createdisk VDK004 /dev/sde1
/usr/sbin/oracleasm createdisk VDK005 /dev/sdf1

[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK001 /dev/sdb1
Writing disk header: done
Instantiating disk: done
(VDK002 through VDK005 produce the same output)

Troubleshooting:
Marking disk "VOL5" as an ASM disk: [FAILED]   -- the new partition was not yet recognized; run /sbin/partprobe first, then retry:
/etc/init.d/oracleasm createdisk VDK001 /dev/sdb1
/etc/init.d/oracleasm createdisk VDK002 /dev/sdc1
/etc/init.d/oracleasm createdisk VDK003 /dev/sdd1
/etc/init.d/oracleasm createdisk VDK004 /dev/sde1
/etc/init.d/oracleasm createdisk VDK005 /dev/sdf1
To delete an existing disk:
/etc/init.d/oracleasm deletedisk VDK001

Scan and list the ASM disks (must be done on both nodes, node1 and node2):

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
[root@node1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@node1 ~]# /etc/init.d/oracleasm listdisks
VDK001
VDK002
VDK003
VDK004
VDK005
[root@node2 ~]# (the same two commands give the same output)

Note on the error Device ... is already labeled for ASM disk ...:
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK004 /dev/sde1
Device "/dev/sde1" is already labeled for ASM disk "VDK001"
[root@node1 oracle]# /usr/sbin/oracleasm renamedisk -f /dev/sde1 VDK004
Writing disk header: done
Instantiating disk "VDK004": done

Installing the Grid Software

The packages libaio-0.3.105 (i386), compat-libstdc++-33-3.2.3 (i386), libaio-devel (i386), libgcc (i386), libstdc++ (i386), unixODBC (i386), unixODBC-devel (i386), and pdksh failed the prerequisite check; these failures were ignored and the installation proceeded.

Full pre-installation check (DNS failures can be ignored; run on node1 only):

[root@node1 ~]# su - grid
[grid@node1 ~]$ cd /tmp/oracle/grid
[grid@oraclerac1 grid]$ ./runcluvfy.sh comp nodecon -n oraclerac1,oraclerac2 -verbose

Verifying node connectivity

Checking node connectivity...
Checking hosts config file...
  Node Name    Status
  -----------  ------
  oraclerac1   passed
  oraclerac2   passed
Verification of the hosts config file successful

Interface information for node "oraclerac1"
 Name  IP Address    Subnet       Gateway  Def. Gateway  HW Address         MTU
 eth0  192.168.7.11  192.168.7.0  0.0.0.0  192.168.7.1   08:00:27:17:68:C7  1500
 eth2  172.16.16.1   172.16.16.0  0.0.0.0  192.168.7.1   08:00:27:00:27:D9  1500

Interface information for node "oraclerac2"
 Name  IP Address    Subnet       Gateway  Def. Gateway  HW Address         MTU
 eth0  192.168.7.12  192.168.7.0  0.0.0.0  192.168.7.1   08:00:27:AB:1D:38  1500
 eth2  172.16.16.2   172.16.16.0  0.0.0.0  192.168.7.1   08:00:27:7A:74:50  1500

Result: Node connectivity passed for subnet "192.168.7.0" with node(s) oraclerac1,oraclerac2
Result: TCP connectivity check passed for subnet "192.168.7.0"
Result: Node connectivity passed for subnet "172.16.16.0" with node(s) oraclerac1,oraclerac2
Result: TCP connectivity check passed for subnet "172.16.16.0"

Interfaces found on subnet "192.168.7.0" that are likely candidates for VIP are:
oraclerac1 eth0:192.168.7.11
oraclerac2 eth0:192.168.7.12
Interfaces found on subnet "172.16.16.0" that are likely candidates for a private interconnect are:
oraclerac1 eth2:172.16.16.1
oraclerac2 eth2:172.16.16.2
Checking subnet mask consistency...
Subnet mask consistency check passed.
Result: Node connectivity check passed
Verification of node connectivity was successful.

Launch the grid installer:

[root@node1 ~]# export DISPLAY=:0.0
[root@node1 ~]# xhost +
access control disabled, clients can connect from any host
[root@node1 ~]# su - grid
[grid@node1 ~]$ xhost +
access control disabled, clients can connect from any host
[grid@oraclerac1 grid]$ ./runInstaller

Define the cluster name; the SCAN Name is the scan-ip defined in /etc/hosts; deselect GNS.

The node list initially shows only the first node, rac1; click "Add" to add the second node, rac2.

Configure ASM: select the previously configured raw disks raw1, raw2, and raw3, with External redundancy (i.e. no redundancy). Since this is not a production system, a single device would also suffice. These devices hold the OCR registry and the voting disk.

 

While installing grid, running /u01/app/11.2.0/grid/root.sh failed with an ohasd error:

Adding daemon to inittab
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

  

For a solution, see http://www.cnblogs.com/ld1977/articles/6765341.html

Check the log as prompted:

[grid@rac1 grid]$ vi /u01/app/oraInventory/logs/installActions2016-04-10_04-57-29PM.log
(search for errors in command mode with /ERROR)
INFO: Completed Plugin named: Oracle Cluster Verification Utility
INFO: Checking name resolution setup for "scan-ip"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "scan-ip" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "scan-ip" (IP address: 192.168.248.110) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-ip"
INFO: Verification of SCAN VIP and Listener setup failed

The log shows the failure occurs because the SCAN name is not configured in DNS (resolv.conf); since name resolution here relies on /etc/hosts, the error can be ignored.
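Because name resolution in this setup comes only from /etc/hosts, the SCAN entry can be verified directly. A sketch; HOSTS_LINE is the entry from this guide's hosts file (on a live node use getent hosts scan-ip instead):

```shell
# Expected hosts entry for the SCAN name (from this guide's /etc/hosts):
HOSTS_LINE="192.168.177.110 scan-ip"
# In practice: HOSTS_LINE=$(getent hosts scan-ip)
set -- $HOSTS_LINE        # $1 = address, $2 = name
echo "scan-ip resolves to $1"   # scan-ip resolves to 192.168.177.110
```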

Location of the grid installation inventory.

At this point the grid clusterware installation is complete.

2. Resource checks after installing grid

Run the following commands as the grid user.

[root@rac1 ~]# su - grid

Check the CRS status:

[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

  

Check the Clusterware resources:

[grid@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac1
ora.OCR.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    rac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    rac1

  

Check the cluster nodes:

[grid@rac1 ~]$ olsnodes -n
rac1    1
rac2    2

Check the Oracle TNS listener processes on both nodes:

[grid@rac1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER

  

Confirm the Oracle ASM function for the Oracle Clusterware files:

If the OCR and voting disk files were installed on Oracle ASM, then as the Grid Infrastructure installation owner, use the following command to confirm that the installed Oracle ASM instances are running:

[grid@rac1 ~]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.

  

  

  

 

  

  

Originally published at: https://www.cnblogs.com/ld1977/p/6743852.html
