Installing the 11g Database Software on CentOS 7.2

 
Preface
 
    Yesterday I installed the 11g GI software on CentOS 7.2, but I still ran into some trouble when I went on to install the database software in my environment. I have recorded the main evidence, which is shown below.
 
Procedure
 
    First of all, when I executed "./runInstaller", the OUI got stuck again at the step of checking the semaphore kernel parameters. It showed the same symptom as when I was installing the GI software yesterday.
 
    Part of the installation log:

[worker 3] [ 2018-08-28 01:17:33.038 bst ] [runtimeexec.runcommand:77]  /tmp/cvu_11.2.0.4.0_oracle/exectask.sh -getkernelparam semmsl 
[worker 2] [ 2018-08-28 01:17:33.038 bst ] [remoteexeccommand.validatecmdargs:1049]  calling validatecmdargs
[worker 2] [ 2018-08-28 01:17:33.038 bst ] [remoteexeccommand.validatecmdargs:1055]  checking for arguments validity
[performchecks.flowworker] [ 2018-08-28 01:17:33.038 bst ] [semaphore.acquire:109]  clientresource constructor:blocking semaphore owned by performchecks.flowworker:acquire called by thread performchecks.flowworker m_count=0
[worker 2] [ 2018-08-28 01:17:33.038 bst ] [remoteexeccommand.execute:824]  trying to runremoteexeccmd first to check if server is already running
[worker 2] [ 2018-08-28 01:17:33.038 bst ] [remoteexeccommand.executeinternal:990]  calling executeinternal()
[worker 2] [ 2018-08-28 01:17:33.039 bst ] [remoteexeccommand.executeinternal:1006]  executing the command: '/tmp/cvu_11.2.0.4.0_oracle/exectask.sh' with args '-getkernelparam semmsl ', 'm_stdin == null ->true', 'm_localexecution ->false', 'm_chkexception ->false'
[worker 2] [ 2018-08-28 01:17:33.046 bst ] [utils.getlocalhost:481]  hostname retrieved: rac1, returned: rac1
[worker 2] [ 2018-08-28 01:17:33.047 bst ] [nativesystem.iscmdscv:502]  iscmdscv: cmd=[/usr/bin/ssh -o fallbacktorsh=no  -o passwordauthentication=no  -o stricthostkeychecking=yes  -o numberofpasswordprompts=0  rac2 -n ]
[worker 2] [ 2018-08-28 01:17:33.047 bst ] [nativesystem.iscmdscv:552]  iscmdscv: /usr/bin/ssh is present.
[worker 2] [ 2018-08-28 01:17:33.047 bst ] [nativesystem.iscmdscv:554]  iscmdscv: /usr/bin/ssh is a file.
[worker 2] [ 2018-08-28 01:17:33.047 bst ] [nativesystem.iscmdscv:571]  iscmdscv: returned true
[worker 2] [ 2018-08-28 01:17:33.048 bst ] [runtimeexec.runcommand:75]  calling runtime.exec() with the command 
[worker 2] [ 2018-08-28 01:17:33.048 bst ] [runtimeexec.runcommand:77]  /bin/sh 
[worker 2] [ 2018-08-28 01:17:33.048 bst ] [runtimeexec.runcommand:77]  -c 
[worker 2] [ 2018-08-28 01:17:33.048 bst ] [runtimeexec.runcommand:77]  /usr/bin/ssh -o fallbacktorsh=no  -o passwordauthentication=no  -o stricthostkeychecking=yes  -o numberofpasswordprompts=0  rac2 -n /tmp/cvu_11.2.0.4.0_oracle/exectask.sh -getkernelparam semmsl 
[thread-570] [ 2018-08-28 01:17:33.050 bst ] [streamreader.run:61]  in streamreader.run 
[worker 3] [ 2018-08-28 01:17:33.049 bst ] [runtimeexec.runcommand:142]  runcommand: waiting for the process
[thread-569] [ 2018-08-28 01:17:33.050 bst ] [streamreader.run:61]  in streamreader.run 
[thread-572] [ 2018-08-28 01:17:33.069 bst ] [streamreader.run:61]  in streamreader.run 
[thread-571] [ 2018-08-28 01:17:33.071 bst ] [streamreader.run:61]  in streamreader.run 
[worker 2] [ 2018-08-28 01:17:33.071 bst ] [runtimeexec.runcommand:142]  runcommand: waiting for the process
[thread-571] [ 2018-08-28 01:17:33.183 bst ] [streamreader.run:65]  output><cv_val><cv_cur>kernel.sem = 250    32000    100    128
[thread-571] [ 2018-08-28 01:17:33.183 bst ] [streamreader.run:65]  output></cv_cur><cv_cfg>kernel.sem = 250 32000 100 128
[thread-571] [ 2018-08-28 01:17:33.183 bst ] [streamreader.run:65]  output></cv_cfg></cv_val><cv_vres>0</cv_vres><cv_log>exectask: kernel param retrieval successful</cv_log><cv_eres>0</cv_eres>
[worker 2] [ 2018-08-28 01:17:33.183 bst ] [runtimeexec.runcommand:144]  runcommand: process returns 0
[worker 2] [ 2018-08-28 01:17:33.183 bst ] [runtimeexec.runcommand:161]  runtimeexec: output>
[worker 2] [ 2018-08-28 01:17:33.183 bst ] [runtimeexec.runcommand:164]  <cv_val><cv_cur>kernel.sem = 250    32000    100    128
[worker 2] [ 2018-08-28 01:17:33.183 bst ] [runtimeexec.runcommand:164]  </cv_cur><cv_cfg>kernel.sem = 250 32000 100 128
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [runtimeexec.runcommand:164]  </cv_cfg></cv_val><cv_vres>0</cv_vres><cv_log>exectask: kernel param retrieval successful</cv_log><cv_eres>0</cv_eres>
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [runtimeexec.runcommand:170]  runtimeexec: error>
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [runtimeexec.runcommand:192]  returning from runtimeexec.runcommand
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [unixsystem.dorunremoteexeccmd:3232]  retval = 0
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [unixsystem.dorunremoteexeccmd:3256]  exitvalue = 0
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [remoteexeccommand.executeinternal:1037]  cmdsuccess status: true
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [remoteexeccommand.execute:894]  cmdsuccess status: true
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [verificationutil.fetchtextbytags:2318]  
tags <cv_val> and </cv_val> contains:<cv_cur>kernel.sem = 250    32000    100    128
</cv_cur><cv_cfg>kernel.sem = 250 32000 100 128
</cv_cfg>

[worker 2] [ 2018-08-28 01:17:33.184 bst ] [verificationcommand.execute:234]  formatted exectask output is:
 <cv_val><cv_cur>kernel.sem = 250    32000    100    128
</cv_cur><cv_cfg>kernel.sem = 250 32000 100 128
</cv_cfg></cv_val><cv_vres>0</cv_vres><cv_log>exectask: kernel param retrieval successful</cv_log><cv_eres>0</cv_eres>
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [verificationutil.fetchtextbytags:2318]  
tags <cv_vres> and </cv_vres> contains:0

[worker 2] [ 2018-08-28 01:17:33.184 bst ] [verificationcommand.execute:245]  vfycode is: 0
[worker 2] [ 2018-08-28 01:17:33.184 bst ] [verificationutil.fetchtextbytags:2318]  
tags <cv_eres> and </cv_eres> contains:0

[worker 2] [ 2018-08-28 01:17:33.184 bst ] [verificationutil.fetchtextbytags:2318]  
tags <cv_cur> and </cv_cur> contains:kernel.sem = 250    32000    100    128

[worker 2] [ 2018-08-28 01:17:33.184 bst ] [verificationutil.fetchtextbytags:2318]  
tags <cv_cur> and </cv_cur> contains:kernel.sem = 250    32000    100    128

[worker 2] [ 2018-08-28 01:17:33.184 bst ] [verificationutil.fetchtextbytags:2318]  
tags <cv_cfg> and </cv_cfg> contains:kernel.sem = 250 32000 100 128

[worker 2] [ 2018-08-28 01:17:33.185 bst ] [clusterconfig$executecommand.returncommandtoclient:2951]  returncommandtoclient; fillcount=0 is full=false
[worker 2] [ 2018-08-28 01:17:33.185 bst ] [semaphore.acquire:109]  syncbufferempty:acquire called by thread worker 2 m_count=200
[worker 2] [ 2018-08-28 01:17:33.185 bst ] [semaphore.release:85]  syncbufferfull:release called by thread worker 2 m_count=1
[worker 2] [ 2018-08-28 01:17:33.185 bst ] [clientresource.getlistener:157]  calling getlistener
[worker 2] [ 2018-08-28 01:17:33.185 bst ] [clusterconfig$executecommand.run:3046]  owner thread name of the blocking semaphore performchecks.flowworker
[worker 2] [ 2018-08-28 01:17:33.185 bst ] [clusterconfig$executecommand.run:3054]  obtained semaphore
[worker 2] [ 2018-08-28 01:17:33.185 bst ] [semaphore.release:85]  clientresource constructor:blocking semaphore owned by performchecks.flowworker:release called by thread worker 2 m_count=1
[worker 2] [ 2018-08-28 01:17:33.185 bst ] [clusterconfig$executecommand.run:3069]  released semaphore by worker=worker 2
[worker 2] [ 2018-08-28 01:17:33.185 bst ] [semaphore.acquire:109]  syncbufferfull:acquire called by thread worker 2 m_count=0
[performchecks.flowworker] [ 2018-08-28 01:17:33.185 bst ] [clusterconfig.block:608]  block acquired semnum=0
[performchecks.flowworker] [ 2018-08-28 01:17:33.185 bst ] [semaphore.acquire:109]  clientresource constructor:blocking semaphore owned by performchecks.flowworker:acquire called by thread performchecks.flowworker m_count=0
^c
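
    The stuck check is only retrieving kernel.sem over ssh, and the log above already shows both the current and the configured values on the remote node. Before skipping the check, the same parameter can be queried by hand on both nodes; the following is just a sketch (node names rac1/rac2 and the expected values are taken from the log above):

# Manually query the semaphore parameters the stuck CVU step was fetching.
# Per the log, both nodes should report: kernel.sem = 250 32000 100 128
for h in rac1 rac2; do
    echo "== $h =="
    ssh "$h" /sbin/sysctl kernel.sem
done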

 

 

    Therefore, I specified the "-ignorePrereq" option again to skip this unknown issue.

[oracle@rac1 database]$ ./runInstaller -ignorePrereq
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 5009 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 909 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-08-28_01-21-14AM. Please wait ...[oracle@rac1 database]$ You can find the log of this install session at:
 /u01/oraInventory/logs/installActions2018-08-28_01-21-14AM.log
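
    Since the GI stack from yesterday's installation is already in place, another option would be to run the prerequisite checks outside the hanging OUI session with cluvfy. This is only a sketch; $GRID_HOME is assumed to point to the installed 11.2.0.4 grid home, and the node names come from the logs above:

# Standalone pre-database-install check, run as the grid/oracle software owner.
$GRID_HOME/bin/cluvfy stage -pre dbinst -n rac1,rac2 -verbose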

 

    Then I encountered another two make errors in the step of linking the binaries:

 

 
1. Exception String: Error in invoking target ‘agent nmhs’ of makefile ‘/u01/oracle/db/sysman/lib/ins_emagent.mk’.
    According to the MOS document “Error in invoking target ‘agent nmhs’ of make file ins_emagent.mk while installing Oracle 11.2.0.4 on Linux (Doc ID 2299494.1)”, I performed the three steps below (a scripted alternative is sketched after the list):

1. vim $ORACLE_HOME/sysman/lib/ins_emagent.mk
2. Change "$(MK_EMAGENT_NMECTL)" to "$(MK_EMAGENT_NMECTL) -lnnz11".
3. Click "Retry" to continue the OUI installation.
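
    Instead of editing the makefile in vim, the same change can be scripted. This is only a sketch; it assumes the GNU sed shipped with CentOS 7 and that the line to patch ends exactly with $(MK_EMAGENT_NMECTL):

# Back up ins_emagent.mk and append -lnnz11 to the $(MK_EMAGENT_NMECTL) line.
sed -i.bak 's/\$(MK_EMAGENT_NMECTL)$/& -lnnz11/' $ORACLE_HOME/sysman/lib/ins_emagent.mk
# Verify the change before clicking "Retry" in the OUI.
grep NMECTL $ORACLE_HOME/sysman/lib/ins_emagent.mk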

 

2. Exception String: Error in invoking target ‘irman ioracle’ of makefile ‘/u01/oracle/db/rdbms/lib/ins_rdbms.mk’.
    The shared libraries that the rman binary links against look correct:

[oracle@rac1 bin]$ ldd rman
    linux-vdso.so.1 =>  (0x00007fff017c4000)
    librt.so.1 => /lib64/librt.so.1 (0x00007ffbcd0df000)
    libclntsh.so.11.1 => /u01/oracle/db/lib/libclntsh.so.11.1 (0x00007ffbca677000)
    libnnz11.so => /u01/oracle/db/lib/libnnz11.so (0x00007ffbca2aa000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007ffbca0a6000)
    libm.so.6 => /lib64/libm.so.6 (0x00007ffbc9da4000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ffbc9b88000)
    libnsl.so.1 => /lib64/libnsl.so.1 (0x00007ffbc996e000)
    libc.so.6 => /lib64/libc.so.6 (0x00007ffbc95a1000)
    libaio.so.1 => /lib64/libaio.so.1 (0x00007ffbc939f000)
    /lib64/ld-linux-x86-64.so.2 (0x00007ffbcd2e7000)
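
    It can also help to pull the exact linker message out of the OUI session log before relinking by hand. A rough sketch, assuming the log path printed when runInstaller started (shown above):

# Show the lines around the failing ins_rdbms.mk invocation in the OUI log.
grep -n -B2 -A10 "ins_rdbms.mk" /u01/oraInventory/logs/installActions2018-08-28_01-21-14AM.log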

 

    Execute the commands below to relink the failing targets manually, then retry:

[oracle@rac1 bin]$ /usr/bin/make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ioracle
chmod 755 /u01/oracle/db/bin
test ! -f /u01/oracle/db/bin/oracle ||\
   mv -f /u01/oracle/db/bin/oracle /u01/oracle/db/bin/oracleO
mv /u01/oracle/db/rdbms/lib/oracle /u01/oracle/db/bin/oracle
chmod 6751 /u01/oracle/db/bin/oracle
[oracle@rac1 bin]$ /usr/bin/make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk irman

 - Linking recovery manager (rman)
rm -f /u01/oracle/db/rdbms/lib/rman
gcc -o /u01/oracle/db/rdbms/lib/rman -m64 -z noexecstack -L/u01/oracle/db/rdbms/lib/ -L/u01/oracle/db/lib/ -L/u01/oracle/db/lib/stubs/   /u01/oracle/db/lib/s0main.o /u01/oracle/db/rdbms/lib/sskrmed.o /u01/oracle/db/rdbms/lib/skrmpt.o -ldbtools11 -lclient11 -lsql11 -lpls11  -lrt -lplp11 -lsnls11 -lunls11 -lnls11 -lslax11 -lpls11  -lrt -lplp11 /u01/oracle/db/lib/libplc11.a -lclntsh  `cat /u01/oracle/db/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/oracle/db/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lztkg11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11 -lmm -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 `cat /u01/oracle/db/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/oracle/db/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11   -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11 -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11   `cat /u01/oracle/db/lib/sysliblist` -Wl,-rpath,/u01/oracle/db/lib -lm    `cat /u01/oracle/db/lib/sysliblist` -ldl -lm   -L/u01/oracle/db/lib
test ! -f /u01/oracle/db/bin/rman ||\
   mv -f /u01/oracle/db/bin/rman /u01/oracle/db/bin/rmanO
mv /u01/oracle/db/rdbms/lib/rman /u01/oracle/db/bin/rman
chmod 751 /u01/oracle/db/bin/rman
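
    A quick sanity check after the manual relink, before clicking "Retry" in the OUI (paths and the expected permission bits come from the make output above):

# oracle should be mode 6751 (setuid/setgid) and rman mode 751 after the moves above.
ls -l /u01/oracle/db/bin/oracle /u01/oracle/db/bin/rman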

 

    Finally, it turned out to be okay again:

    (screenshot omitted: the OUI dialog listing the configuration scripts to execute)
    After I executed the scripts shown in the picture above on both nodes, the database software was installed normally and no more errors occurred.
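
    For completeness: the script the OUI asks for at the end of an 11g database software installation is normally just root.sh from the new Oracle home, run as root on each node. A sketch under that assumption (the home /u01/oracle/db and the node names come from the output above):

# Run as root on each node when the OUI prompts for the configuration scripts.
[root@rac1 ~]# /u01/oracle/db/root.sh
[root@rac2 ~]# /u01/oracle/db/root.sh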

 
