select ge.grantype, ct.component, ge.grannum, ge.grannext, ge.granprev, lpad(' ', level) || level
from x$ksmge ge, x$kmgsct ct
where ge.grantype = ct.grantype
start with granprev = 0
connect by ge.grantype = prior ge.grantype and prior grannum = granprev;
This query shows the granule allocation status as well as the allocation linked list. If the default buffer cache produces too much output, add grantype != 6 to the query.
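For example, the filtered query might look like this (a sketch; it assumes grantype 6 is the default buffer cache on this instance, as noted above):
select ge.grantype, ct.component, ge.grannum, ge.grannext, ge.granprev, lpad(' ', level) || level
from x$ksmge ge, x$kmgsct ct
where ge.grantype = ct.grantype and ge.grantype != 6
start with granprev = 0
connect by ge.grantype = prior ge.grantype and prior grannum = granprev;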
Friday, May 31, 2013
Change hadoop file storage from /u01/hadoop to /hadoop
1. Stop hadoop and hbase (on node 1 only)
cd /home/hadoop/hbase-0.94.2/bin/
./stop-hbase.sh
cd /home/hadoop/hadoop-1.0.4/bin
./stop-dfs.sh
2. Create /hadoop, copy the data over, then move the old /u01/hadoop aside
su
cd /
mkdir /hadoop
chown hadoop:apache /hadoop
su - hadoop
cd /hadoop
cp -r /u01/hadoop/* .
su
mv /u01/hadoop /u01/hadoop1
3. Find the configuration files that reference /u01. The search script is like the following.
cd /home/hadoop
for i in `find . -name "*.xml" -print`
do
cat $i |grep \/u01 && echo $i
done
After finding the related files, back them up and then modify all of them.
[hadoop@qoda-n1 ~]$ cp ./hadoop-1.0.4/conf/hdfs-site.xml ./hadoop-1.0.4/conf/hdfs-site.xml.053113
[hadoop@qoda-n1 ~]$ cp ./hadoop-1.0.4/conf/mapred-site.xml ./hadoop-1.0.4/conf/mapred-site.xml.053113
[hadoop@qoda-n1 ~]$ cp ./hbase-0.94.2/conf/hbase-site.xml ./hbase-0.94.2/conf/hbase-site.xml.053113
[hadoop@qoda-n1 ~]$ vi ./hadoop-1.0.4/conf/hdfs-site.xml
[hadoop@qoda-n1 ~]$ vi ./hadoop-1.0.4/conf/mapred-site.xml
[hadoop@qoda-n1 ~]$ vi ./hbase-0.94.2/conf/hbase-site.xml
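As an alternative to editing each file in vi, GNU sed could apply the same substitution in place to the three files found above (a sketch; take the backups first, as shown):
[hadoop@qoda-n1 ~]$ sed -i 's|/u01/hadoop|/hadoop|g' ./hadoop-1.0.4/conf/hdfs-site.xml ./hadoop-1.0.4/conf/mapred-site.xml ./hbase-0.94.2/conf/hbase-site.xml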
4. Restart hadoop and hbase
cd /home/hadoop/hadoop-1.0.4/bin
./start-dfs.sh
cd /home/hadoop/hbase-0.94.2/bin/
./start-hbase.sh
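A quick sanity check after the restart, assuming the JDK's jps is on the PATH: it should list the expected daemons (for example NameNode, DataNode, SecondaryNameNode, HMaster, HRegionServer, depending on this node's role).
[hadoop@qoda-n1 ~]$ jps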
Thursday, May 30, 2013
Create an hbase table then put values into it with the hbase shell
Connect with the hbase shell
mapr@cvlqepn3:/opt/mapr/hbase/hbase-0.94.5/conf> hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.5-mapr, r1dddcf01de3d37559586994b6727827c9493e0f0, Wed May 1 17:42:07 PDT 2013
Not all HBase shell commands are applicable to MapR tables.
Consult MapR documentation for the list of supported commands.
Create table t1 with families f1 and f2.
Families are static 99% of the time; they are like columns in an RDBMS.
hbase(main):001:0> create 't1','f1','f2'
0 row(s) in 1.9250 seconds
Put column col1 with value 'v1' into family 'f1' in row 'r1'; put columns col2 and col3 with value 'v2' into family 'f2' in row 'r2'.
hbase(main):002:0> put 't1','r1','f1:col1','v1'
0 row(s) in 0.2050 seconds
hbase(main):003:0> put 't1','r2','f2:col2','v2'
0 row(s) in 0.0130 seconds
hbase(main):004:0> put 't1','r2','f2:col3','v2'
0 row(s) in 0.0040 seconds
Show data in table t1
hbase(main):005:0> scan 't1'
ROW COLUMN+CELL
r1 column=f1:col1, timestamp=1369947468296, value=v1
r2 column=f2:col2, timestamp=1369947497236, value=v2
r2 column=f2:col3, timestamp=1369947501384, value=v2
2 row(s) in 0.1020 seconds
Get the columns and values of row 'r2'
hbase(main):006:0> get 't1', 'r2'
COLUMN CELL
f2:col2 timestamp=1369947497236, value=v2
f2:col3 timestamp=1369947501384, value=v2
2 row(s) in 0.0320 seconds
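To clean up the test table afterward, the usual hbase shell pair of commands should do it (a sketch; a table must be disabled before it can be dropped):
hbase(main):007:0> disable 't1'
hbase(main):008:0> drop 't1'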
Tuesday, May 21, 2013
opatch util and Old patch storage cleanup
MetaLink Note 550522 has the answer and tells you: it depends. Normally, this data is used in order to be able to roll back a patch. However, if you have installed a patchset (e.g. 10.2.0.4), then the patches for the previous patchset (10.2.0.3), which are located in the .patch_storage directory, are not needed anymore and can be removed.
Recently I installed the Oracle Cluster Infrastructure (11.2.0.3) patchset on top of my old 11.2.0.1 RAC infrastructure. The old patches applied to this cluster are no longer needed, and about 1.8 GB can be reclaimed. The following is the process.
Here the new CRS home is /u01/app/11.2.0.3/grid
The old CRS home is /u01/app/11.2.0/grid
[grid@qoda-n2 OPatch]$ pwd
/u01/app/
[root@qoda-n2 app]# du -h |grep G |grep -v K
3.3G ./11.2.0.3/grid
3.3G ./11.2.0.3
1.8G ./11.2.0/grid/.patch_storage/9413827_Oct_25_2010_07_07_18
2.6G ./11.2.0/grid/.patch_storage
1.2G ./11.2.0/grid/bin
5.7G ./11.2.0/grid
5.7G ./11.2.0
8.9G .
Go to the new CRS home
[grid@qoda-n2 OPatch]$ pwd
/u01/app/11.2.0.3/grid/OPatch
[grid@qoda-n2 OPatch]$ ./opatch util Cleanup -oh /u01/app/11.2.0/grid
Invoking OPatch 11.2.0.1.7
Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation. All rights reserved.
UTIL session
Oracle Home : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.7
OUI version : 11.2.0.1.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2013-05-21_10-35-47AM.log
Invoking utility "cleanup"
OPatch will clean up 'restore.sh,make.txt' files and 'rac,scratch,backup' directories.
You will be still able to rollback patches after this cleanup.
Do you want to proceed? [y|n]
y
User Responded with: Y
Size of directory "/u01/app/11.2.0/grid/.patch_storage" before cleanup is 2708420951 bytes.
Size of directory "/u01/app/11.2.0/grid/.patch_storage" after cleanup is 1853977297 bytes.
UtilSession: Backup area for restore has been cleaned up. For a complete list of files/directories
deleted, Please refer log file.
As it suggested, only about 900 MB of storage has been removed. We will need to remove the rest through a manual process.
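One possible manual step, assuming the 9413827_Oct_25_2010_07_07_18 directory belongs only to the superseded 11.2.0.1 patches (verify with opatch lsinventory against the old home first; this is a sketch, not a supported procedure):
[grid@qoda-n2 OPatch]$ ./opatch lsinventory -oh /u01/app/11.2.0/grid
[root@qoda-n2 app]# rm -rf /u01/app/11.2.0/grid/.patch_storage/9413827_Oct_25_2010_07_07_18
[root@qoda-n2 app]# du -sh /u01/app/11.2.0/grid/.patch_storage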
Wednesday, May 15, 2013
LVM snapshot backup and restore mongo DB sample
This is an LVM snapshot validation test performed on jinlinux.
Set up the LV
lvcreate -n demo -L 10000M vg_jinlinux
[root@jinlinux ~]# lvscan
ACTIVE '/dev/vg_jinlinux/lv_root' [50.00 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_home' [64.26 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_swap' [3.73 GiB] inherit
ACTIVE Original '/dev/vg_jinlinux/demo' [11.09 GiB] inherit
ACTIVE Snapshot '/dev/vg_jinlinux/mongoSnap' [1.46 GiB] inherit
[root@jinlinux ~]# vgdisplay
--- Volume group ---
VG Name vg_jinlinux
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 28
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 5
Max PV 0
Cur PV 1
Act PV 1
VG Size 148.52 GiB
PE Size 4.00 MiB
Total PE 38021
Alloc PE / Size 33420 / 130.55 GiB
Free PE / Size 4601 / 17.97 GiB
VG UUID BZRyNp-1wf6-VTa2-zOiD-IMyL-gbiS-5AUyFU
Create the file system
mkfs.ext4 -m 0 /dev/vg_jinlinux/demo
Create directory
[root@jinlinux ~]# mkdir /app/mongoData
Change Owner
chown mongod:mongod /app/mongoData
Mount lv
[root@jinlinux mongod]# mount -t ext4 /dev/vg_jinlinux/demo /app/mongoData
Edit /etc/mongod.conf to point to this new directory
logpath=/var/log/mongo/mongod.log
logappend=true
fork = true
port = 27021
bind_ip = 127.0.0.1,10.128.213.66
dbpath=/app/mongoData
journal=true
directoryperdb=true
quota=true
quotaFiles=10
rest=true
slowms=200
oplogSize=100
pidfilepath = /var/run/mongodb/mongod.pid
Start mongod
mongod --smallfiles -f /etc/mongod.conf
Begin inserting data
Invoke the Java program "dbMoverOracleToMongo.java" to insert fewer than 10,000 rows into mongo.
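If that Java program is not at hand, a comparable load can be generated straight from the mongo shell (a sketch; the usr_sdlc collection name comes from this test, and the seq/val fields are just illustrative):
[root@jinlinux mongod]# mongo localhost:27021/test
> for (var i = 0; i < 10000; i++) { db.usr_sdlc.insert({ seq: i, val: "v" + i }); }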
While the insert is running and db.usr_sdlc.count() shows 5550, take an LVM snapshot
[root@jinlinux ~]# lvcreate -L1500M -s -n mongoSnap /dev/mapper/vg_jinlinux-demo
Let the insert keep going for a while until db.usr_sdlc.count() shows 8040.
Validate that the snapshot was created
[root@jinlinux mongod]# lvscan
ACTIVE '/dev/vg_jinlinux/lv_root' [50.00 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_home' [64.26 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_swap' [3.73 GiB] inherit
ACTIVE Original '/dev/vg_jinlinux/demo' [11.09 GiB] inherit
ACTIVE Snapshot '/dev/vg_jinlinux/mongoSnap' [1.46 GiB] inherit
shutdown mongod
mongo localhost:27021
use admin
db.shutdownServer();
create new directory to mount snapshot
mkdir /app/mongoSnap
chown mongod:mongod /app/mongoSnap
mount snapshot
[root@jinlinux mongod]# mount -t ext4 /dev/vg_jinlinux/mongoSnap /app/mongoSnap
Create new mongod.conf to bring up snapshot version of DB
[root@jinlinux mongod]# cat /home/mongod/mongodSnap.conf
logpath=/tmp/mongodSnap.log
logappend=true
fork = true
port = 27021
bind_ip = 127.0.0.1,10.128.213.66
dbpath=/app/mongoSnap
journal=true
directoryperdb=true
quota=true
quotaFiles=10
rest=true
slowms=200
oplogSize=100
pidfilepath = /tmp/mongod.pid
Bring up mongod with snapshot
mongod --smallfiles -f /home/mongod/mongodSnap.conf
Show row count
[root@jinlinux mongod]# mongo localhost:27021
MongoDB shell version: 2.4.3
connecting to: localhost:27021/test
> use admin
switched to db admin
> use test
switched to db test
> db.usr_sdlc.count();
5295
> use admin
switched to db admin
> db.shutdownServer();
Bring up the original server and show the row count, 8040. The difference in counts shows the snapshot backup was taken at a moment while active writing was happening.
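Restart the original instance the same way it was started earlier (repeated here for completeness; it reads /app/mongoData via /etc/mongod.conf):
mongod --smallfiles -f /etc/mongod.conf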
[root@jinlinux mongod]# mongo localhost:27021
MongoDB shell version: 2.4.3
connecting to: localhost:27021/test
> use test
switched to db test
> db.usr_sdlc.count();
8040
> db.shutdownServer();
shutdown command only works with the admin database; try 'use admin'
> use admin
switched to db admin
> db.shutdownServer();
Wed May 15 11:49:04.882 DBClientCursor::init call() failed
server should be down...
Wed May 15 11:49:04.883 trying reconnect to localhost:27021
Wed May 15 11:49:04.884 reconnect localhost:27021 ok
Wed May 15 11:49:04.884 Socket say send() errno:104 Connection reset by peer 127.0.0.1:27021
Unmount snapshot directory
[root@jinlinux app]# umount mongoSnap
[root@jinlinux app]# lvscan
ACTIVE '/dev/vg_jinlinux/lv_root' [50.00 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_home' [64.26 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_swap' [3.73 GiB] inherit
ACTIVE Original '/dev/vg_jinlinux/demo' [11.09 GiB] inherit
ACTIVE Snapshot '/dev/vg_jinlinux/mongoSnap' [1.46 GiB] inherit
Back up the snapshot to a different location
[root@jinlinux app]# dd if=/dev/vg_jinlinux/mongoSnap |gzip > /tmp/mongSnap.gz
23248896+0 records in
23248896+0 records out
11903434752 bytes (12 GB) copied, 217.663 s, 54.7 MB/s
[root@jinlinux app]# ls -alrt /tmp/mongSnap.gz
-rw-r--r--. 1 root root 958360718 May 15 13:55 /tmp/mongSnap.gz
Restore validation
Create an LV; make sure it is the same size as or bigger than the original LV.
[root@jinlinux app]# lvcreate --size 12G --name snapRestore vg_jinlinux
Logical volume "snapRestore" created
[root@jinlinux app]# mkdir /app/snapRestore
[root@jinlinux app]# chown mongod:mongod /app/snapRestore
Restore the lv from backup
[root@jinlinux app]# gzip -d -c /tmp/mongSnap.gz |dd of=/dev/vg_jinlinux/snapRestore
23248896+0 records in
23248896+0 records out
11903434752 bytes (12 GB) copied, 513.228 s, 23.2 MB/s
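Before mongod can use /app/snapRestore as its dbpath, the restored LV presumably has to be mounted there; this step is implied by the configuration below (a sketch):
[root@jinlinux app]# mount -t ext4 /dev/vg_jinlinux/snapRestore /app/snapRestore
[root@jinlinux app]# df -h /app/snapRestore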
Create mongodSnapRestore.conf like the following
-bash-4.1$ cat mongodSnapRestore.conf
logpath=/tmp/mongodSnapRestore.log
logappend=true
fork = true
port = 27021
bind_ip = 127.0.0.1,10.128.213.66
dbpath=/app/snapRestore
journal=true
directoryperdb=true
quota=true
quotaFiles=10
rest=true
slowms=200
oplogSize=100
pidfilepath = /tmp/mongodRestore.pid
Bring up the Mongo database from the /app/snapRestore location
-bash-4.1$ mongod --smallfiles -f /home/mongod/mongodSnapRestore.conf
forked process: 6178
all output going to: /tmp/mongodSnapRestore.log
child process started successfully, parent exiting
Show count
-bash-4.1$ mongo localhost:27021
MongoDB shell version: 2.4.3
connecting to: localhost:27021/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
> use test
switched to db test
> db.usr_sdlc.count();
5295
>
Friday, May 10, 2013
ODA can have only one level of file mirroring in disk groups
To have different types of file mirroring in the same disk group, the easiest way is to:
1. Define a NORMAL redundancy disk group.
2. Define different templates for this disk group with the redundancy attribute set to MIRROR, HIGH, or UNPROTECTED.
3. Create tablespaces with these templates using an incomplete file name (see the sketch below).
e.g.
create tablespace test_data DATAFILE '+data(template)';
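A minimal sketch of steps 2 and 3 together, assuming a NORMAL redundancy disk group named data and an illustrative template name mirror_tpl (the ALTER DISKGROUP runs on the ASM instance, the CREATE TABLESPACE on the database):
ALTER DISKGROUP data ADD TEMPLATE mirror_tpl ATTRIBUTES (MIRROR);
CREATE TABLESPACE test_data DATAFILE '+data(mirror_tpl)' SIZE 100M;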
On ODA, all disk groups are configured with HIGH redundancy. In this case, the template attributes MIRROR and HIGH both give three-way mirroring.
Reference: "Oracle Automatic Storage Management Administrator's Guide" 11gR2, page 7-16.
ODA interconnect layout
Oracle Database Appliance Interconnect
The ODA has 8 physical Ethernet ports (6 GbE and 2 10 GbE) available to you, along with 2 internal fibre ports that are used for a built-in cluster interconnect. Here's the output of running "ethtool" on the internal NICs:
[root@poda-n1 ~]# ethtool eth0
Settings for eth0:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: d
Current message level: 0x00000001 (1)
Link detected: yes
[root@poda-n1 ~]# ethtool eth1
Settings for eth1:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000001 (1)
Link detected: yes
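To see which of these interfaces the clusterware has actually registered for the private interconnect, oifcfg can be queried as well (a sketch; run it as the grid owner with the clusterware environment set):
[grid@poda-n1 ~]$ oifcfg getif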