Friday, May 31, 2013

Change hadoop file storage from /u01/hadoop to /hadoop


1. Stop HBase and Hadoop DFS from node 1 only (the stop scripts shut down the whole cluster from the master). 
    cd /home/hadoop/hbase-0.94.2/bin/
    ./stop-hbase.sh
    cd /home/hadoop/hadoop-1.0.4/bin
    ./stop-dfs.sh
2. Make the new directory in the desired location on both nodes, copy the data over, and move the old directory aside (a quick sanity check on the copy is sketched after this step). 
    su 
    cd /
    mkdir /hadoop
    chown hadoop:apache /hadoop
    su - hadoop
    cd /hadoop
    cp -r /u01/hadoop/* . 
    su 
    mv /u01/hadoop /u01/hadoop1
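A quick sanity check that the copy is complete (not part of the original steps; the directories should match apart from timestamps):
    du -s /u01/hadoop1 /hadoop
    diff -r /u01/hadoop1 /hadoop && echo "copy matches"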
3. Modify the following XML files to replace "/u01/hadoop" with "/hadoop" on both nodes. 
A script to find the affected files:
    cd /home/hadoop
    for i in `find . -name "*.xml" -print`
    do
        # print any lines that still reference /u01, then the file name
        grep '/u01' "$i" && echo "$i"
    done

After finding the related files, back them up and then edit them (a sed-based alternative to hand-editing is sketched after the commands below).

    [hadoop@qoda-n1 ~]$ vi ./hadoop-1.0.4/conf/hdfs-site.xml
    [hadoop@qoda-n1 ~]$ cp ./hadoop-1.0.4/conf/hdfs-site.xml ./hadoop-1.0.4/conf/hdf-site.xml.053113
    [hadoop@qoda-n1 ~]$ cp ./hadoop-1.0.4/conf/mapred-site.xml ./hadoop-1.0.4/conf/mapred-site.xml.053113
    [hadoop@qoda-n1 ~]$ cp ./hbase-0.94.2/conf/hbase-site.xml ./hbase-0.94.2/conf/hbase-site.xml.053113
    [hadoop@qoda-n1 ~]$ vi ./hadoop-1.0.4/conf/hdfs-site.xml
    [hadoop@qoda-n1 ~]$ vi ./hadoop-1.0.4/conf/mapred-site.xml
    [hadoop@qoda-n1 ~]$ vi ./hbase-0.94.2/conf/hbase-site.xml
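An alternative to editing each file by hand, after taking the backups above (a sketch; run it on both nodes and review the files afterwards):
    for f in ./hadoop-1.0.4/conf/hdfs-site.xml \
             ./hadoop-1.0.4/conf/mapred-site.xml \
             ./hbase-0.94.2/conf/hbase-site.xml
    do
        # replace every occurrence of /u01/hadoop with /hadoop in place
        sed -i 's#/u01/hadoop#/hadoop#g' "$f"
    done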


4. Restart Hadoop DFS and HBase 
    cd /home/hadoop/hadoop-1.0.4/bin
    ./start-dfs.sh
    cd /home/hadoop/hbase-0.94.2/bin/
    ./start-hbase.sh
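A quick way to confirm the daemons came back up (not in the original steps) is to run jps on each node and look for the Hadoop and HBase processes:
    jps | egrep 'NameNode|DataNode|SecondaryNameNode|HMaster|HRegionServer'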

Thursday, May 30, 2013

create an hbase table then put values into it with hbase shell



connect with hbase shell 
    mapr@cvlqepn3:/opt/mapr/hbase/hbase-0.94.5/conf> hbase shell
    HBase Shell; enter 'help<RETURN>' for list of supported commands.
    Type "exit<RETURN>" to leave the HBase Shell
    Version 0.94.5-mapr, r1dddcf01de3d37559586994b6727827c9493e0f0, Wed May 1 17:42:07 PDT 2013

    Not all HBase shell commands are applicable to MapR tables.
    Consult MapR documentation for the list of supported commands.

create table t1 with column families f1 and f2
families are mostly static once defined. They are like columns in an RDBMS
    hbase(main):001:0> create 't1','f1','f2'
    0 row(s) in 1.9250 seconds

put column col1 with value 'v1' into family 'f1' in row 'r1'; put columns col2 and col3 with value 'v2' into family 'f2' in row 'r2'
hbase(main):002:0> put 't1','r1','f1:col1','v1'
0 row(s) in 0.2050 seconds

hbase(main):003:0> put 't1','r2','f2:col2','v2'
0 row(s) in 0.0130 seconds

hbase(main):004:0> put 't1','r2','f2:col3','v2'
0 row(s) in 0.0040 seconds

Show data in table t1
hbase(main):005:0> scan 't1'
ROW COLUMN+CELL
r1 column=f1:col1, timestamp=1369947468296, value=v1
r2 column=f2:col2, timestamp=1369947497236, value=v2
r2 column=f2:col3, timestamp=1369947501384, value=v2
2 row(s) in 0.1020 seconds


Get the columns and values of row 'r2'
hbase(main):006:0> get 't1', 'r2'
COLUMN CELL
f2:col2 timestamp=1369947497236, value=v2
f2:col3 timestamp=1369947501384, value=v2
2 row(s) in 0.0320 seconds
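A single cell can also be fetched by naming the column in the get; as a small addition to the transcript above, the command can even be piped into the shell non-interactively:
echo "get 't1', 'r2', 'f2:col2'" | hbase shell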

Tuesday, May 21, 2013

opatch util and Old patch storage cleanup

MetaLink Note 550522 has the answer and tells you: it depends. Normally, this data is used in order to be able to rollback a patch. However, if you have installed a patchset (eg. 10.2.0.4), then the patches for the previous patchset (10.2.0.3) which are located in the .patch_storage directory are not needed anymore and can be removed. 


Recently I installed the Oracle Grid Infrastructure 11.2.0.3 patchset on top of my old 11.2.0.1 RAC infrastructure. The old patches applied to this cluster are no longer needed, and about 1.8 GB can be reclaimed. The process follows. 


Here the new CRS Home is /u01/app/11.2.0.3/grid
and the old CRS Home is /u01/app/11.2.0/grid




[root@qoda-n2 app]# pwd
/u01/app

[root@qoda-n2 app]# du -h |grep G |grep -v K


3.3G    ./11.2.0.3/grid
3.3G    ./11.2.0.3
1.8G    ./11.2.0/grid/.patch_storage/9413827_Oct_25_2010_07_07_18
2.6G    ./11.2.0/grid/.patch_storage
1.2G    ./11.2.0/grid/bin
5.7G    ./11.2.0/grid
5.7G    ./11.2.0
8.9G    .

Go to the OPatch directory under the new CRS Home 
[grid@qoda-n2 OPatch]$ pwd
/u01/app/11.2.0.3/grid/OPatch

[grid@qoda-n2 OPatch]$ ./opatch util Cleanup -oh /u01/app/11.2.0/grid
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.

UTIL session

Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version       : 11.2.0.1.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2013-05-21_10-35-47AM.log

Invoking utility "cleanup"
OPatch will clean up 'restore.sh,make.txt' files and 'rac,scratch,backup' directories.
You will be still able to rollback patches after this cleanup.
Do you want to proceed? [y|n]
y
User Responded with: Y
Size of directory "/u01/app/11.2.0/grid/.patch_storage" before cleanup is 2708420951 bytes.
Size of directory "/u01/app/11.2.0/grid/.patch_storage" after cleanup is 1853977297 bytes.

UtilSession: Backup area for restore has been cleaned up. For a complete list of files/directories
deleted, Please refer log file.


As the output shows, only about 850 MB was removed (2,708,420,951 - 1,853,977,297 bytes). The rest will have to be removed manually; a possible approach is sketched below. 
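A possible manual cleanup, assuming rollback to the 11.2.0.1-era patches will never be needed (check opatch lsinventory on the old home first; the path is the one from the du listing above):
du -sh /u01/app/11.2.0/grid/.patch_storage
rm -rf /u01/app/11.2.0/grid/.patch_storage/9413827_Oct_25_2010_07_07_18
du -sh /u01/app/11.2.0/grid/.patch_storage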






Wednesday, May 15, 2013

LVM snapshot backup and restore of a mongo DB: a sample

This is an LVM snapshot validation test performed on jinlinux.


Set up the LV
lvcreate -n demo -L 10000M vg_jinlinux

[root@jinlinux ~]# lvscan
ACTIVE '/dev/vg_jinlinux/lv_root' [50.00 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_home' [64.26 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_swap' [3.73 GiB] inherit
ACTIVE Original '/dev/vg_jinlinux/demo' [11.09 GiB] inherit
ACTIVE Snapshot '/dev/vg_jinlinux/mongoSnap' [1.46 GiB] inherit
[root@jinlinux ~]# vgdisplay
--- Volume group ---
VG Name vg_jinlinux
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 28
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 5
Max PV 0
Cur PV 1
Act PV 1
VG Size 148.52 GiB
PE Size 4.00 MiB
Total PE 38021
Alloc PE / Size 33420 / 130.55 GiB
Free PE / Size 4601 / 17.97 GiB
VG UUID BZRyNp-1wf6-VTa2-zOiD-IMyL-gbiS-5AUyFU

Create the file system
mkfs.ext4 -m 0 /dev/vg_jinlinux/demo

Create directory
[root@jinlinux ~]# mkdir /app/mongoData

Change Owner
chown mongod:mongod /app/mongoData

Mount lv 
[root@jinlinux mongod]# mount -t ext4 /dev/vg_jinlinux/demo /app/mongoData

Edit /etc/mongod.conf to point dbpath at this new directory
    logpath=/var/log/mongo/mongod.log
    logappend=true
    fork = true
    port = 27021
    bind_ip = 127.0.0.1,10.128.213.66
    dbpath=/app/mongoData
    journal=true
    directoryperdb=true
    quota=true
    quotaFiles=10
    rest=true
    slowms=200
    oplogSize=100
    pidfilepath = /var/run/mongodb/mongod.pid

Start mongod 
mongod --smallfiles -f /etc/mongod.conf

Begin inserting data
Invoke the Java program "dbMoverOracleToMongo.java" to insert fewer than 10,000 rows into mongo.

While the insert is running and db.usr_sdlc.count() shows 5550, take the LVM snapshot
[root@jinlinux ~]# lvcreate -L1500M -s -n mongoSnap /dev/mapper/vg_jinlinux-demo

Let the insert keep going for a while, until db.usr_sdlc.count() shows 8040

Validate snapshot created
[root@jinlinux mongod]# lvscan
ACTIVE '/dev/vg_jinlinux/lv_root' [50.00 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_home' [64.26 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_swap' [3.73 GiB] inherit
ACTIVE Original '/dev/vg_jinlinux/demo' [11.09 GiB] inherit
ACTIVE Snapshot '/dev/vg_jinlinux/mongoSnap' [1.46 GiB] inherit

shutdown mongod
mongo localhost:27021
use admin
db.shutdownServer();

create new directory to mount snapshot
mkdir /app/mongoSnap
chown mongod:mongod /app/mongoSnap

mount snapshot
[root@jinlinux mongod]# mount -t ext4 /dev/vg_jinlinux/mongoSnap /app/mongoSnap

Create new mongod.conf to bring up snapshot version of DB
[root@jinlinux mongod]# cat /home/mongod/mongodSnap.conf
logpath=/tmp/mongodSnap.log
logappend=true
fork = true
port = 27021
bind_ip = 127.0.0.1,10.128.213.66
dbpath=/app/mongoSnap
journal=true
directoryperdb=true
quota=true
quotaFiles=10
rest=true
slowms=200
oplogSize=100
pidfilepath = /tmp/mongod.pid

Bring up mongod with snapshot 
mongod --smallfiles -f /home/mongod/mongodSnap.conf

Show row count 
[root@jinlinux mongod]# mongo localhost:27021
MongoDB shell version: 2.4.3
connecting to: localhost:27021/test
> use admin
switched to db admin
> use test
switched to db test
> db.usr_sdlc.count();
5295
> use admin
switched to db admin
> db.shutdownServer();


Bring up the original server (with the original /etc/mongod.conf) and show the row count of 8040. The difference in counts shows the snapshot backup was taken at a moment when active writing was still happening. 

[root@jinlinux mongod]# mongo localhost:27021
MongoDB shell version: 2.4.3
connecting to: localhost:27021/test
> use test
switched to db test
> db.usr_sdlc.count();
8040
> db.shutdownServer();
shutdown command only works with the admin database; try 'use admin'
> use admin
switched to db admin
> db.shutdownServer();
Wed May 15 11:49:04.882 DBClientCursor::init call() failed
server should be down...
Wed May 15 11:49:04.883 trying reconnect to localhost:27021
Wed May 15 11:49:04.884 reconnect localhost:27021 ok
Wed May 15 11:49:04.884 Socket say send() errno:104 Connection reset by peer 127.0.0.1:27021




Unmount snapshot directory

[root@jinlinux app]# umount mongoSnap
[root@jinlinux app]# lvscan
ACTIVE '/dev/vg_jinlinux/lv_root' [50.00 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_home' [64.26 GiB] inherit
ACTIVE '/dev/vg_jinlinux/lv_swap' [3.73 GiB] inherit
ACTIVE Original '/dev/vg_jinlinux/demo' [11.09 GiB] inherit
ACTIVE Snapshot '/dev/vg_jinlinux/mongoSnap' [1.46 GiB] inherit

Backup snapshot to different location
[root@jinlinux app]# dd if=/dev/vg_jinlinux/mongoSnap |gzip > /tmp/mongSnap.gz
23248896+0 records in
23248896+0 records out
11903434752 bytes (12 GB) copied, 217.663 s, 54.7 MB/s

[root@jinlinux app]# ls -alrt /tmp/mongSnap.gz
-rw-r--r--. 1 root root 958360718 May 15 13:55 /tmp/mongSnap.gz
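A quick integrity check of the archive before relying on it (an extra step, not part of the original run):
gzip -t /tmp/mongSnap.gz && echo "archive OK"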

Restore validation 
Create an LV; make sure it is the same size as or bigger than the original LV
[root@jinlinux app]# lvcreate --size 12G --name snapRestore vg_jinlinux
Logical volume "snapRestore" created
[root@jinlinux app]# mkdir /app/snapRestore
[root@jinlinux app]# chown mongod:mongod /app/snapRestore

Restore the lv from backup
[root@jinlinux app]# gzip -d -c /tmp/mongSnap.gz |dd of=/dev/vg_jinlinux/snapRestore
23248896+0 records in
23248896+0 records out
11903434752 bytes (12 GB) copied, 513.228 s, 23.2 MB/s
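Before starting mongod from the restored copy, the restored LV presumably still has to be mounted at the dbpath; this step is implied but not shown in the transcript:
mount -t ext4 /dev/vg_jinlinux/snapRestore /app/snapRestore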

Create mongodSnapRestore.conf like the following
-bash-4.1$ cat mongodSnapRestore.conf
logpath=/tmp/mongodSnapRestore.log
logappend=true
fork = true
port = 27021
bind_ip = 127.0.0.1,10.128.213.66
dbpath=/app/snapRestore
journal=true
directoryperdb=true
quota=true
quotaFiles=10
rest=true
slowms=200
oplogSize=100
pidfilepath = /tmp/mongodRestore.pid


Bring up Mongo database from /app/snapRestore location
-bash-4.1$ mongod --smallfiles -f /home/mongod/mongodSnapRestore.conf
forked process: 6178
all output going to: /tmp/mongodSnapRestore.log
child process started successfully, parent exiting

Show count
-bash-4.1$ mongo localhost:27021
MongoDB shell version: 2.4.3
connecting to: localhost:27021/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
> use test
switched to db test
> db.usr_sdlc.count();
5295
>

Friday, May 10, 2013

ODA can have only one level of file mirroring in disk groups

To have different types of file mirroring in the same disk group, the easiest way is to:

1. Define Normal redundancy disk group.
2. Define different templates for this disk group with the redundancy attribute set to "MIRROR", "HIGH", or "UNPROTECTED".
3. Create tablespaces with these templates, using an incomplete file name.
    e.g.
    create tablespace test_data DATAFILE '+data(template)';

In ODA, all disk groups are in the high redundancy configuration; in that case the template attributes MIRROR and HIGH will both give three-way mirroring. 
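A minimal sketch of steps 2 and 3 on a normal redundancy disk group (the disk group, template, and tablespace names here are illustrative, not from the post; run the template change as SYSASM on the ASM instance and the tablespace creation as SYSDBA on the database):
# On the ASM instance: add a two-way-mirror template to the disk group.
sqlplus / as sysasm <<'EOF'
ALTER DISKGROUP data ADD TEMPLATE two_way ATTRIBUTES (MIRROR);
EOF
# On the database: create a tablespace whose datafile uses that template
# through an incomplete ASM file name.
sqlplus / as sysdba <<'EOF'
CREATE TABLESPACE test_data DATAFILE '+data(two_way)' SIZE 100M;
EOF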

Reference to : "Oracle Automatic Storage Management Administrator's Guide"  11gR2 Page 7-16

ODA interconnect layout


Oracle Database Appliance Interconnect


The ODA has 8 (6 GbE and 2 10GbE) physical ethernet ports available to you, along with 2 internal fibre ports that are used for a built-in cluster interconnect. Here's the output of running "ethtool" on the internal NICs:

[root@poda-n1 ~]# ethtool eth0
Settings for eth0:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: d
Current message level: 0x00000001 (1)
Link detected: yes

[root@poda-n1 ~]# ethtool eth1
Settings for eth1:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000001 (1)
Link detected: yes

Wednesday, May 8, 2013

Failed to connect to mongod with the following Java exception: java.io.IOException - message: couldn't connect to [host:port] bc:java.net.SocketException: Too many open files





15:46:04,067 WARNING [com.mongodb.ReplicaSetStatus] (ReplicaSetStatus:Updater) Server seen down: hostname.somecompany.com/172.20.X.X:27021 
- java.io.IOException - message: couldn't connect to [hostname.somecompany.com/172.20.X.X:27021] bc:java.net.SocketException: Too many open files


This issue was caused by incorrect ulimit settings for the mongod user. 

To fix it: 
1. Modify the ulimits according to the installation spec for the MongoDB version in use. 
sudo vi /etc/security/limits.conf
...
#add for mongod group
@mongod         soft    nofile          65000
@mongod         soft    nproc           65000
@mongod         hard    nofile          65000
@mongod         hard    nproc           65000


2. Restart mongod and mongos
sudo service mongod restart
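To confirm the new limits actually apply to the running process (an extra check, not in the original post; the /proc path uses whatever PID mongod got):
cat /proc/$(pgrep -x mongod)/limits | egrep -i 'open files|processes'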


Thursday, May 2, 2013

cell_offload_processing parameter caused high CPU usage for non-Exadata Oracle

I encountered an issue with the cell_offload_processing parameter in 11.2.0.2. The parameter is set to TRUE by default and caused high CPU usage. According to the MetaLink note "High 'ksv master wait' And 'ASM File Metadata Operation' Waits In Non-Exadata 11g [ID 1308282.1]", the symptom arises because "Event 'KSV master wait' indicates the process on the RDBMS side is waiting for a reply from a process on the ASM side." 

To resolve this issue, run "alter system set cell_offload_processing = false;". 
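A minimal sqlplus session to check the current value and apply the change (the ALTER SYSTEM is the fix above; the SHOW PARAMETER lines are just verification):
sqlplus / as sysdba <<'EOF'
show parameter cell_offload_processing
alter system set cell_offload_processing = false;
show parameter cell_offload_processing
EOF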

Wednesday, May 1, 2013

MTU setting in Oracle database appliance


In the Oracle Database Appliance, the interconnect uses jumbo frames (MTU 9000); all other NICs use the regular 1500-byte frame. 

There is a very interesting test article on the Oracle blog about using jumbo frames to improve processing speed as well as reduce server load: "Jumbo Frames for RAC Interconnect" by Rene Kundersma.

This is the beauty of a packaged product: there is no worry about the non-standard frame size causing compatibility issues. 

ifconfig -a                                        
bond0     UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1           
...                                                                                                                                                       
bond1     UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1           
                                                                              
bond2     UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1           
                                                                              
eth0      Link encap:Ethernet  HWaddr 00:21:28:F9:19:3E
          inet addr:192.168.16.25  Bcast:192.168.16.255  Mask:255.255.255.0
          inet6 addr: fe80::221:28ff:fef9:193e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:83864468 errors:0 dropped:0 overruns:0 frame:0
          TX packets:88488696 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:100590633610 (93.6 GiB)  TX bytes:109000919319 (101.5 GiB)
          Interrupt:28 Memory:dee80000-deea0000
                       
                                                                              
eth0:1    Link encap:Ethernet  HWaddr 00:21:28:F9:19:3E                       
          inet addr:169.254.74.231  Bcast:169.254.127.255  Mask:255.255.128.0 
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1                  
          Interrupt:28 Memory:dee80000-deea0000                               
                                                                              
eth1      Link encap:Ethernet  HWaddr 00:21:28:F9:19:3F                       
          inet addr:192.168.17.25  Bcast:192.168.17.255  Mask:255.255.255.0   
          inet6 addr: fe80::221:28ff:fef9:193f/64 Scope:Link                  
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1                  
          RX packets:72255555 errors:0 dropped:0 overruns:0 frame:0           
          TX packets:76183594 errors:0 dropped:0 overruns:0 carrier:0         
          collisions:0 txqueuelen:1000                                        
          RX bytes:55355355888 (51.5 GiB)  TX bytes:78343835583 (72.9 GiB)    
          Interrupt:40 Memory:deee0000-def00000                               
                                                                              
eth1:1    Link encap:Ethernet  HWaddr 00:21:28:F9:19:3F                       
          inet addr:169.254.211.232  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1                  
          Interrupt:40 Memory:deee0000-def00000                               
                                                                              
eth2      
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1            
          
                                                                              
eth3      UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1            
                                                                              
eth4      UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1                    
                                                                              
eth5      UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1                    
                                                                              
eth6      UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1                    
                                                                              
eth7      UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1                    
                                                                                        
eth8      UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1                             
                                                                              
eth9      UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1                    
                                                                                        
lo        Link encap:Local Loopback                                           
          inet addr:127.0.0.1  Mask:255.0.0.0                                 
          inet6 addr: ::1/128 Scope:Host                                      
          UP LOOPBACK RUNNING  MTU:16436  Metric:1                            
          RX packets:196742468 errors:0 dropped:0 overruns:0 frame:0          
          TX packets:196742468 errors:0 dropped:0 overruns:0 carrier:0        
          collisions:0 txqueuelen:0                                           
          RX bytes:87192154769 (81.2 GiB)  TX bytes:87192154769 (81.2 GiB)    
                                                                              
xbond0    Link encap:Ethernet  HWaddr 90:E2:BA:1A:A7:58                       
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1           
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0                  
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0                
          collisions:0 txqueuelen:0                                           
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)      

Tuesday, April 30, 2013

Oracle database appliance 500G internal disk



Checked our ODA box today and found there are 251 GB free on the internal disk waiting to be allocated (8053 free PEs x 32 MB per PE is roughly 251 GB). Sweet. A possible way to carve out this space is sketched after the LVM listings below.

[root@n1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               VolGroupSys
  PV Size               465.66 GB / not usable 3.44 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              14901
  Free PE               8053
  Allocated PE          6848
  PV UUID               qta79y-SYoP-T24F-aNoq-aTpS-eG1z-O443E4

[root@n1 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroupSys/LogVolRoot
  VG Name                VolGroupSys
  LV UUID                jLUfrQ-EbOg-xae0-ihaQ-2sSM-0qsu-dcKXH8
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                30.00 GB
  Current LE             960
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VolGroupSys/LogVolU01
  VG Name                VolGroupSys
  LV UUID                fqBmOA-SEP5-qziV-f98F-bJdw-jLdW-NOtTYA
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                100.00 GB
  Current LE             3200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/VolGroupSys/LogVolOpt
  VG Name                VolGroupSys
  LV UUID                w7zOfV-MVDh-l44f-9S5V-vwzl-FW5P-zoQKpY
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                60.00 GB
  Current LE             1920
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/VolGroupSys/LogVolSwap
  VG Name                VolGroupSys
  LV UUID                hSnWvz-HRyf-cycE-MMdS-3jPd-RXUr-hMFUPQ
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                24.00 GB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
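
A possible way to put the free space to use (a sketch only; the LV name, size, and mount point are hypothetical, and whether carving up VolGroupSys is appropriate on an appliance is a separate question - check free extents with vgdisplay first):
lvcreate -L 100G -n LogVolData VolGroupSys
mkfs -t ext3 /dev/VolGroupSys/LogVolData
mkdir -p /data
mount /dev/VolGroupSys/LogVolData /data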