Wednesday, August 14, 2013

Installing Cloudera Hadoop CDH4 on a RedHat Enterprise Linux (RHEL) 5 single node

I wanted to play around with Hadoop, but had only a single RedHat Enterprise Linux (RHEL) 5 box available. So I decided to install Cloudera Hadoop CDH4 in pseudo-distributed mode.

In pseudo-distributed mode, all the HDFS and MapReduce daemons run on a single node. In short, the work of both the NameNode and the DataNode is done by one machine.

What I did here was basically follow the Cloudera Hadoop documentation for a pseudo-distributed installation:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Quick-Start/cdh4qs_topic_3_2.html

[root@isvx3 hadoop]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.9 (Tikanga)

Here is what I did after that. First, add the Cloudera public GPG key to your repository by executing the following command:

[root@isvx7 ~]# sudo rpm --import http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera
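
(This assumes the cloudera-cdh4 yum repository is already set up on the box; it shows up as cloudera-cdh4 in the yum output below. If it is not, a repo file along the following lines should do. I am writing this from memory of Cloudera's CDH4 install instructions, so treat it as a sketch rather than the canonical file.)

[root@isvx7 ~]# cat /etc/yum.repos.d/cloudera-cdh4.repo
[cloudera-cdh4]
name=Cloudera's Distribution for Hadoop, Version 4
baseurl=http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/
gpgkey=http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera
gpgcheck=1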

Install Hadoop in pseudo-distributed mode:
To install Hadoop with MRv1:

[root@isvx7 ~]# sudo yum install hadoop-0.20-conf-pseudo
Loaded plugins: rhnplugin, security
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.
The program yum-complete-transaction is found in the yum-utils package.
--> Running transaction check
---> Package hadoop-0.20-conf-pseudo.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
--> Processing Dependency: hadoop-hdfs-datanode = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-0.20-conf-pseudo
--> Processing Dependency: hadoop-hdfs-namenode = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-0.20-conf-pseudo
--> Processing Dependency: hadoop-hdfs-secondarynamenode = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-0.20-conf-pseudo
--> Processing Dependency: hadoop = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-0.20-conf-pseudo
--> Processing Dependency: hadoop-0.20-mapreduce-tasktracker = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-0.20-conf-pseudo
--> Processing Dependency: hadoop-0.20-mapreduce-jobtracker = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-0.20-conf-pseudo
--> Running transaction check
---> Package hadoop.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
--> Processing Dependency: bigtop-utils >= 0.6 for package: hadoop
--> Processing Dependency: zookeeper >= 3.4.0 for package: hadoop
---> Package hadoop-0.20-mapreduce-jobtracker.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
--> Processing Dependency: hadoop-0.20-mapreduce = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-0.20-mapreduce-jobtracker
---> Package hadoop-0.20-mapreduce-tasktracker.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
---> Package hadoop-hdfs-datanode.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
--> Processing Dependency: hadoop-hdfs = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-hdfs-datanode
--> Processing Dependency: hadoop-hdfs = 2.0.0+1357-1.cdh4.3.0.p0.21.el5 for package: hadoop-hdfs-datanode
---> Package hadoop-hdfs-namenode.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
---> Package hadoop-hdfs-secondarynamenode.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
--> Running transaction check
---> Package bigtop-utils.noarch 0:0.6.0+73-1.cdh4.3.0.p0.17.el5 set to be updated
---> Package hadoop-0.20-mapreduce.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
---> Package hadoop-hdfs.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5 set to be updated
--> Processing Dependency: bigtop-jsvc for package: hadoop-hdfs
---> Package zookeeper.noarch 0:3.4.5+19-1.cdh4.3.0.p0.14.el5 set to be updated
--> Running transaction check
---> Package bigtop-jsvc.x86_64 0:1.0.10-1.cdh4.3.0.p0.14.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package       Arch    Version                             Repository      Size
================================================================================
Installing:
 hadoop-0.20-conf-pseudo
               noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4  7.2 k
Installing for dependencies:
 bigtop-jsvc   x86_64  1.0.10-1.cdh4.3.0.p0.14.el5         cloudera-cdh4   48 k
 bigtop-utils  noarch  0.6.0+73-1.cdh4.3.0.p0.17.el5       cloudera-cdh4  7.8 k
 hadoop        noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4   16 M
 hadoop-0.20-mapreduce
               noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4   25 M
 hadoop-0.20-mapreduce-jobtracker
               noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4  4.8 k
 hadoop-0.20-mapreduce-tasktracker
               noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4  4.8 k
 hadoop-hdfs   noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4   12 M
 hadoop-hdfs-datanode
               noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4  4.7 k
 hadoop-hdfs-namenode
               noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4  4.8 k
 hadoop-hdfs-secondarynamenode
               noarch  2.0.0+1357-1.cdh4.3.0.p0.21.el5     cloudera-cdh4  4.7 k
 zookeeper     noarch  3.4.5+19-1.cdh4.3.0.p0.14.el5       cloudera-cdh4  3.9 M

Transaction Summary
================================================================================
Install      12 Package(s)
Upgrade       0 Package(s)

Total download size: 58 M
Is this ok [y/N]: y
Downloading Packages:
(1/12): hadoop-hdfs-datanode-2.0.0+1357-1.cdh4.3.0.p0.21 | 4.7 kB     00:00
(2/12): hadoop-hdfs-secondarynamenode-2.0.0+1357-1.cdh4. | 4.7 kB     00:00
(3/12): hadoop-0.20-mapreduce-tasktracker-2.0.0+1357-1.c | 4.8 kB     00:00
(4/12): hadoop-hdfs-namenode-2.0.0+1357-1.cdh4.3.0.p0.21 | 4.8 kB     00:00
(5/12): hadoop-0.20-mapreduce-jobtracker-2.0.0+1357-1.cd | 4.8 kB     00:00
(6/12): hadoop-0.20-conf-pseudo-2.0.0+1357-1.cdh4.3.0.p0 | 7.2 kB     00:00
(7/12): bigtop-utils-0.6.0+73-1.cdh4.3.0.p0.17.el5.noarc | 7.8 kB     00:00
(8/12): bigtop-jsvc-1.0.10-1.cdh4.3.0.p0.14.el5.x86_64.r |  48 kB     00:00
(9/12): zookeeper-3.4.5+19-1.cdh4.3.0.p0.14.el5.noarch.r | 3.9 MB     00:02
(10/12): hadoop-hdfs-2.0.0+1357-1.cdh4.3.0.p0.21.el5.noa |  12 MB     00:08
(11/12): hadoop-2.0.0+1357-1.cdh4.3.0.p0.21.el5.noarch.r |  16 MB     00:11
(12/12): hadoop-0.20-mapreduce-2.0.0+1357-1.cdh4.3.0.p0. |  25 MB     00:17
--------------------------------------------------------------------------------
Total                                           1.4 MB/s |  58 MB     00:41
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : bigtop-jsvc                                             1/12
  Installing     : bigtop-utils                                            2/12
  Installing     : zookeeper                                               3/12
  Installing     : hadoop                                                  4/12
  Installing     : hadoop-hdfs                                             5/12
  Installing     : hadoop-0.20-mapreduce                                   6/12
  Installing     : hadoop-0.20-mapreduce-jobtracker                        7/12
  Installing     : hadoop-0.20-mapreduce-tasktracker                       8/12
  Installing     : hadoop-hdfs-namenode                                    9/12
  Installing     : hadoop-hdfs-datanode                                   10/12
  Installing     : hadoop-hdfs-secondarynamenode                          11/12
  Installing     : hadoop-0.20-conf-pseudo                                12/12

Installed:
  hadoop-0.20-conf-pseudo.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5

Dependency Installed:
  bigtop-jsvc.x86_64 0:1.0.10-1.cdh4.3.0.p0.14.el5
  bigtop-utils.noarch 0:0.6.0+73-1.cdh4.3.0.p0.17.el5
  hadoop.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5
  hadoop-0.20-mapreduce.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5
  hadoop-0.20-mapreduce-jobtracker.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5
  hadoop-0.20-mapreduce-tasktracker.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5
  hadoop-hdfs.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5
  hadoop-hdfs-datanode.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5
  hadoop-hdfs-namenode.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5
  hadoop-hdfs-secondarynamenode.noarch 0:2.0.0+1357-1.cdh4.3.0.p0.21.el5
  zookeeper.noarch 0:3.4.5+19-1.cdh4.3.0.p0.14.el5

Complete!
[root@isvx7 ~]#



Starting Hadoop and Verifying it is Working Properly:
For MRv1, a pseudo-distributed Hadoop installation consists of one node running all five Hadoop daemons: namenode, jobtracker, secondarynamenode, datanode, and tasktracker.

To verify that the hadoop-0.20-conf-pseudo package is installed on your system, view the files it installs. On Red Hat or SLES systems:

[root@isvx7 ~]# rpm -ql hadoop-0.20-conf-pseudo
/etc/hadoop/conf.pseudo.mr1
/etc/hadoop/conf.pseudo.mr1/README
/etc/hadoop/conf.pseudo.mr1/core-site.xml
/etc/hadoop/conf.pseudo.mr1/hadoop-metrics.properties
/etc/hadoop/conf.pseudo.mr1/hdfs-site.xml
/etc/hadoop/conf.pseudo.mr1/log4j.properties
/etc/hadoop/conf.pseudo.mr1/mapred-site.xml
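
If you are curious how the single-node wiring is done, look inside core-site.xml. From memory of the CDH4 pseudo-distributed defaults (so take this as a sketch, not verbatim file contents), it points the default file system at HDFS on localhost:

[root@isvx7 ~]# cat /etc/hadoop/conf.pseudo.mr1/core-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <!-- remaining properties omitted here -->
</configuration>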

 To start Hadoop, proceed as follows.

Step 1: Format the NameNode.

Before starting the NameNode for the first time, you must format the file system. Make sure you perform the format as the hdfs user. You can do this as part of the command string, using sudo -u hdfs as in the command below.

[root@isvx7 ~]# sudo -u hdfs hdfs namenode -format
13/08/14 16:20:26 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = isvx7.storage.tucson.ibm.com/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.0.0-cdh4.3.0
STARTUP_MSG:   classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.3.0-tests.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/u
sr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.3.0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-api-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hadoop-fairscheduler-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-digester-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-1.7.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-io-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/activation-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jettison-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/servlet-api-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xmlenc-0.52.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-lang-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jline-0.9.94.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-0.20-mapreduce/lib/guava-11.0.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-server-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-json-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-core-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-compiler-1.7.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-math-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/stax-api-1.0.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/junit-4.8.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-0.20-mapredu
ce/.//hadoop-examples-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-core-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-core.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-tools.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-examples.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-test.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-test-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-ant-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-tools-2.0.0-mr1-cdh4.3.0.jar
STARTUP_MSG:   build = file:///data/1/jenkins/workspace/generic-package-centos64-5-5/topdir/BUILD/hadoop-2.0.0-cdh4.3.0/src/hadoop-common-project/hadoop-common -r 48a9315b342ca16de92fcc5be95ae3650629155a; compiled by 'jenkins' on Mon May 27 19:45:28 PDT 2013
STARTUP_MSG:   java = 1.6.0_22
************************************************************/
13/08/14 16:20:26 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-d50f155e-b438-49a0-89b7-4c94d6c48605
13/08/14 16:20:26 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
13/08/14 16:20:26 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/08/14 16:20:26 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/08/14 16:20:26 INFO blockmanagement.BlockManager: defaultReplication         = 1
13/08/14 16:20:26 INFO blockmanagement.BlockManager: maxReplication             = 512
13/08/14 16:20:26 INFO blockmanagement.BlockManager: minReplication             = 1
13/08/14 16:20:26 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/08/14 16:20:26 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/08/14 16:20:26 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/08/14 16:20:26 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
13/08/14 16:20:27 INFO namenode.FSNamesystem: fsOwner             = hdfs (auth:SIMPLE)
13/08/14 16:20:27 INFO namenode.FSNamesystem: supergroup          = supergroup
13/08/14 16:20:27 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/08/14 16:20:27 INFO namenode.FSNamesystem: HA Enabled: false
13/08/14 16:20:27 INFO namenode.FSNamesystem: Append Enabled: true
13/08/14 16:20:27 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/08/14 16:20:27 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/08/14 16:20:27 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/08/14 16:20:27 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 0
13/08/14 16:20:27 INFO namenode.NNStorage: Storage directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name has been successfully formatted.
13/08/14 16:20:27 INFO namenode.FSImage: Saving image file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
13/08/14 16:20:27 INFO namenode.FSImage: Image file of size 119 saved in 0 seconds.
13/08/14 16:20:27 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/08/14 16:20:27 INFO util.ExitUtil: Exiting with status 0
13/08/14 16:20:27 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at isvx7.storage.tucson.ibm.com/127.0.0.1
************************************************************/


Step 2: Start HDFS

[root@isvx7 ~]# for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done
Starting Hadoop datanode:                                  [  OK  ]
starting datanode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-datanode-isvx7.storage.tucson.ibm.com.out
Starting Hadoop namenode:                                  [  OK  ]
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-isvx7.storage.tucson.ibm.com.out
Starting Hadoop secondarynamenode:                         [  OK  ]
starting secondarynamenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-isvx7.storage.tucson.ibm.com.out
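
Before going to the web console, a quick command-line sanity check (the same loop idiom as above, but with status instead of start) should report all three HDFS daemons as running:

[root@isvx7 ~]# for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x status ; done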

I then verified that the services started fine by checking the web console. The NameNode provides a web console at http://localhost:50070/ for viewing your Distributed File System (DFS) capacity, the number of DataNodes, and logs. In this pseudo-distributed configuration, you should see one live DataNode.

[Screenshot: NameNode web console]

Clicking on the Browse File system link above showed the following:

[Screenshot: HDFS file browser]

The NameNode logs showed the following:

[Screenshot: NameNode logs]

Clicking on the Live Nodes above showed:

[Screenshot: Live Nodes view]

Step 3: Create the /tmp Directory

Create the /tmp directory and set permissions:



[root@isvx7 ~]# sudo -u hdfs hadoop fs -mkdir /tmp
[root@isvx7 ~]# sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
[root@isvx7 ~]#
Step 4: Create the MapReduce system directories:

[root@isvx7 ~]# sudo -u hdfs hadoop fs -mkdir -p /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
[root@isvx7 ~]# sudo -u hdfs hadoop fs -chmod 1777 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
[root@isvx7 ~]# sudo -u hdfs hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred

Step 5: Verify the HDFS File Structure

[root@isvx7 ~]# sudo -u hdfs hadoop fs -ls -R /
drwxrwxrwt   - hdfs supergroup          0 2013-08-14 22:44 /tmp
drwxr-xr-x   - hdfs supergroup          0 2013-08-14 22:45 /var
drwxr-xr-x   - hdfs supergroup          0 2013-08-14 22:45 /var/lib
drwxr-xr-x   - hdfs supergroup          0 2013-08-14 22:45 /var/lib/hadoop-hdfs
drwxr-xr-x   - hdfs supergroup          0 2013-08-14 22:45 /var/lib/hadoop-hdfs/cache
drwxr-xr-x   - mapred supergroup          0 2013-08-14 22:45 /var/lib/hadoop-hdfs/cache/mapred
drwxr-xr-x   - mapred supergroup          0 2013-08-14 22:45 /var/lib/hadoop-hdfs/cache/mapred/mapred
drwxrwxrwt   - mapred supergroup          0 2013-08-14 22:45 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
[root@isvx7 ~]#

Step 6: Start MapReduce

[root@isvx7 ~]# for x in `cd /etc/init.d ; ls hadoop-0.20-mapreduce-*` ; do sudo service $x start ; done
Starting Hadoop jobtracker:                                [  OK  ]
starting jobtracker, logging to /var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-isvx7.storage.tucson.ibm.com.out
Starting Hadoop tasktracker:                               [  OK  ]
starting tasktracker, logging to /var/log/hadoop-0.20-mapreduce/hadoop-hadoop-tasktracker-isvx7.storage.tucson.ibm.com.out
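
The MapReduce daemons can be checked the same way, with status instead of start. The JobTracker also serves its own web console, at http://localhost:50030/ by default, showing the cluster summary and submitted jobs.

[root@isvx7 ~]# for x in `cd /etc/init.d ; ls hadoop-0.20-mapreduce-*` ; do sudo service $x status ; done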

Step 7: Create User Directories

Create a home directory for MapReduce user joe. It is best to do this on the NameNode; for example:

[root@isvx7 ~]# sudo -u hdfs hadoop fs -mkdir /user/joe
[root@isvx7 ~]# sudo -u hdfs hadoop fs -chown joe /user/joe
[root@isvx7 ~]#
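
At this point the cluster is usable. As a quick smoke test you can run the example grep job, along the lines of the Cloudera quick start. This sketch assumes joe exists as a local user on the box; the examples jar ships with the hadoop-0.20-mapreduce package installed above:

[root@isvx7 ~]# sudo -u joe hadoop fs -mkdir /user/joe/input
[root@isvx7 ~]# sudo -u joe hadoop fs -put /etc/hadoop/conf/*.xml /user/joe/input
[root@isvx7 ~]# sudo -u joe hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep /user/joe/input /user/joe/output 'dfs[a-z.]+'
[root@isvx7 ~]# sudo -u joe hadoop fs -cat '/user/joe/output/part-*'

If everything is wired correctly, the job runs a few map tasks on the tasktracker, and the final cat prints counts of the configuration property names matching dfs[a-z.]+.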



Thursday, August 1, 2013

TAF for Oracle RAC

During a PoC of the IBM SVC Stretched Cluster for Oracle RAC, some of the tests simulated the failure of an Oracle RAC node or of an entire site.

We had configured an Oracle SCAN IP on the cluster, which meant that client connections would be balanced evenly between the two nodes of the Oracle RAC cluster.

We also configured Oracle TAF (Transparent Application Failover).

FAILOVER CONCEPTS, from
http://www.oracle.com/technetwork/database/features/oci/taf-10-133239.pdf:

Failover allows a database to recover on another system within a cluster. A figure in that paper illustrates a typical database cluster configuration. Although the example shows a two-system cluster, larger clusters can be constructed. In a cold failover configuration, only one active instance can mount the database at a time. With Oracle Real Application Clusters, multiple instances can mount the database, speeding recovery from failures.

The failure of one instance is detected by the surviving instances, which assume the workload of the failed instance. Clients connected to the failed instance migrate to a surviving instance; the mechanics of this migration depend on the cluster configuration. The Transparent Application Failover feature automatically reconnects client sessions to the database and minimizes disruption to end-user applications.

Here is the tnsnames.ora file, with TAF configured, that we used for the PoC.

On the client machine

-sh-4.1$ pwd
/home/oracle
-sh-4.1$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/ora11/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

svccon =
  (description=
    (address=(protocol=tcp)(host=192.168.45.244)(port=1521))
    (address=(protocol=tcp)(host=192.168.45.245)(port=1521))
    (load_balance=yes)
    (connect_data =
        (server = dedicated)
        (service_name=svcdb)
        (failover_mode =
            (type=select)
            (method=basic)
            (retries=180)
            (delay=5)
        )
    )
  )
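
In this entry, type=select means that in-flight SELECT statements are resumed on the surviving node after a failover, method=basic means the failover connection is only established at the time of failure, and retries=180 with delay=5 means the client keeps retrying the reconnect for up to 15 minutes. A simple way to watch TAF do its job (a standard check, not something from our PoC scripts) is to connect through the svccon alias, query v$session, kill the instance the session landed on, and run the query again; the FAILED_OVER column should flip from NO to YES:

-sh-4.1$ sqlplus system@svccon

SQL> select username, failover_type, failover_method, failed_over from v$session where username = 'SYSTEM';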