Thursday, October 20, 2011

Protocol used by RMAN to communicate with the Oracle server

I just wanted to see what happens from a server-process point of view when I run "rman" from the command line. So from one terminal I ran "rman" as follows:
bash-3.2$ rman target /
When I did that, I observed that three new processes had been started on the server. The rman process is the obvious one, but the other two were child processes of the rman process:
oracle 32309274 30998642   0 22:01:57  pts/0  0:00 rman target /
oracle 34275468 32309274   0 22:01:57      -  0:00 oracletestasm (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 37421080 32309274   0 22:01:58      -  0:00 oracletestasm (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
This got me curious about PROTOCOL=beq. Reading through the docs, I found the following about the BEQ protocol:

  • The BEQ Protocol Adapter is both a communications mechanism and a process-spawning mechanism.
  • If a service name is not specified, either directly by the user on the command line or the login screen, or indirectly through an environment variable, then the BEQ Protocol Adapter is used.
  • When the BEQ protocol is used, a dedicated server is always used; the multi-threaded server is never used.
  • This dedicated server is started automatically by the BEQ Protocol Adapter, which waits for the server process to start and attach to an existing SGA. If the startup of the server process is successful, the BEQ Protocol Adapter then provides inter-process communication via UNIX pipes.
  • An important feature of the BEQ Protocol Adapter is that no network listener is required for its operation, since the adapter is linked into the client tools and directly starts its own server process with no outside interaction.
  • Another thing to note is that the BEQ Protocol Adapter is always installed and always linked in to all client tools (a quick way to see this in action is sketched below).
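To see the adapter in action yourself, here is a minimal check, assuming the same local setup as in the ps output above (the grep pattern just matches the bequeath connect string):

# In one terminal, connect without specifying a service name; the BEQ adapter
# spawns the dedicated server process directly, with no listener involved:
bash-3.2$ sqlplus / as sysdba

# In another terminal, the spawned server shows the bequeath connect string:
bash-3.2$ ps -ef | grep "PROTOCOL=beq" | grep -v grep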

Wednesday, October 12, 2011

Oracle ASM and data rebalancing between the volumes

In this post I want to show how data is distributed evenly between all the volumes of an Oracle ASM diskgroup. E.g. if we have 2 x 100 GB volumes in our diskgroup and 100 GB of data, then the data is distributed evenly between the volumes, i.e. the first volume will hold 50 GB and the second volume will hold 50 GB. Now if I add a 3rd volume to the diskgroup, the data will be re-balanced between the three volumes, i.e. each volume will hold 33.33 GB of the data.

The initial distribution of data between the volumes, and the later re-balancing of the data, is based on each volume's capacity, not on the I/O pattern. This means that if we do have a table that is heavily accessed, we could potentially have hot-spots and degraded performance due to the limitations of HDD disks. A sketch of adding a volume and watching the rebalance is shown below.
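For example, adding a fifth volume and watching the rebalance run could look like this (a sketch, not from the original session; /dev/rhdisk25 is a hypothetical disk and the rebalance power is just an illustration):

SQL> alter diskgroup DATA add disk '/dev/rhdisk25' rebalance power 4;

SQL> select operation, state, power, sofar, est_work, est_minutes from v$asm_operation;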

Here is the information I got just after creating the Oracle ASM diskgroup +DATA using hdisk21, hdisk22, hdisk23, and hdisk24 on an AIX system. Each volume in this case is 300 GB.


SQL> select name, path, free_mb, total_mb from v$asm_disk;

NAME          PATH                FREE_MB   TOTAL_MB
------------- ----------------- ---------- ----------
DATA_0000     /dev/rhdisk21        307180     307200
DATA_0001     /dev/rhdisk22        307185     307200
DATA_0002     /dev/rhdisk23        307184     307200
DATA_0003     /dev/rhdisk24        307185     307200


4 rows selected.

SQL>

After creating the database and schema, creating the tables, and loading them with data, here is what I have. As you can see, the data is still evenly distributed between all the volumes of the Oracle ASM diskgroup.


SQL> select name,path, free_mb, total_mb from v$asm_disk;

NAME          PATH                FREE_MB   TOTAL_MB
------------- ----------------- ---------- ----------
DATA_0000     /dev/rhdisk21        161103     307200
DATA_0001     /dev/rhdisk22        161111     307200
DATA_0002     /dev/rhdisk23        161113     307200
DATA_0003     /dev/rhdisk24        161108     307200

4 rows selected.

SQL>
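A quick way to eyeball the balance is to compute the used space on each disk. This helper query was not part of the original session, but it only uses the v$asm_disk columns already shown above:

SQL> select name, total_mb - free_mb as used_mb, round((total_mb - free_mb)/total_mb*100, 2) as pct_used from v$asm_disk order by name;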

Tuesday, October 11, 2011

Increase the redo log file size on a live system

While running a workload on my test system, I noticed in the AWR report that "log file switch (checkpoint incomplete)" was at the top of the Top 5 Timed Foreground Events. It was taking around 48.43% of DB time, which is rather high.

To resolve the issue it is recommended to increase the size of the redo logs and/or add more redo log groups. A quick way to gauge the log switch rate is sketched below.
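Before resizing, it can help to see how often log switches actually occur. This helper query was not part of the original session:

SQL> select to_char(first_time, 'YYYY-MM-DD HH24') as hour, count(*) as switches from v$log_history group by to_char(first_time, 'YYYY-MM-DD HH24') order by 1;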

It is not possible to increase the size of an existing redo log file, so on the live system where the workload was running I did the following to increase the size of the redo logs:

I added 3 new log groups, each of size 250M:

SQL> alter database add logfile group 4 size 250M;

Database altered.

SQL> alter database add logfile group 5 size 250M;

Database altered.

SQL> alter database add logfile group 6 size 250M;

Database altered.

SQL>
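As an aside, this database uses Oracle-managed files on ASM, so the member files are named automatically in +DATA. Without OMF you would name the members explicitly; the path below is hypothetical:

SQL> alter database add logfile group 7 ('/u01/oradata/testasm/redo07a.log') size 250M;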
SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 ACTIVE
         2 INACTIVE
         3 ACTIVE
         4 CURRENT
         5 UNUSED
         6 UNUSED

6 rows selected.

SQL>
SQL> select l.bytes/1024/1024 "Meg",g.member from v$log l,v$logfile g where l.group# = g.group#;

       Meg
----------
MEMBER
--------------------------------------------------------------------------------
        50
+DATA/testasm/onlinelog/group_3.263.764177393

        50
+DATA/testasm/onlinelog/group_2.262.764177393

        50
+DATA/testasm/onlinelog/group_1.261.764177393


       250
+DATA/testasm/onlinelog/group_4.276.764264429

       250
+DATA/testasm/onlinelog/group_5.277.764264441

       250
+DATA/testasm/onlinelog/group_6.278.764264449


6 rows selected.


The log groups need to be inactive before they can be dropped, so I dropped groups 1 and 3 first:

SQL> alter database drop logfile group 1;

Database altered.


SQL> alter database drop logfile group 3;

Database altered.

SQL>
SQL> alter system checkpoint;

System altered.

SQL> SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

    GROUP# ARC STATUS
---------- --- ----------------
         2 NO  INACTIVE
         4 NO  INACTIVE
         5 NO  ACTIVE
         6 NO  CURRENT

I then dropped log group 2:

SQL> alter database drop logfile group 2;

Database altered.

SQL> SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

    GROUP# ARC STATUS
---------- --- ----------------
         4 NO  INACTIVE
         5 NO  ACTIVE
         6 NO  CURRENT

SQL>
SQL> alter system switch logfile;

System altered.

SQL> SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

    GROUP# ARC STATUS
---------- --- ----------------
         4 NO  ACTIVE
         5 NO  CURRENT
         6 NO  ACTIVE


SQL>

There we go, I now have 3 redo log groups, each of 250M:

SQL> select l.bytes/1024/1024 "Meg",g.member from v$log l,v$logfile g where l.group# = g.group#;

       Meg
----------
MEMBER
--------------------------------------------------------------------------------
       250
+DATA/testasm/onlinelog/group_4.276.764264429

       250
+DATA/testasm/onlinelog/group_5.277.764264441

       250
+DATA/testasm/onlinelog/group_6.278.764264449

Monday, October 10, 2011

Increasing the Maximum number of PROCESSES allowed per user in AIX 7.1

One thing to remember while running Oracle benchmark clients is to increase the maximum number of processes the user (oracle in this case) can have on the database server (AIX in this case). You can use "ulimit -a" to check the current "max user processes".

To increase the "max user processes" on an AIX server, run smitty -> select System Environments -> select Change / Show Characteristics of Operating System -> set Maximum number of PROCESSES allowed per user to [600] -> press Enter.

isvp14_ora> smitty
Log in as user oracle and run "ulimit -a". You will see that the maximum number of user processes has increased to 600.
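If you prefer the command line over smitty, the same attribute can be changed with chdev. This is a sketch (run as root; maxuproc is the sys0 attribute behind the smitty menu item above):

# Show the current value of maxuproc:
lsattr -El sys0 -a maxuproc
# Raise it to 600; the change takes effect immediately, no reboot needed:
chdev -l sys0 -a maxuproc=600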