Thursday, October 24, 2013

Step by Step: Deploying Storm-yarn on HDP 2.0 Using the Hortonworks Sandbox

Contents

  1. Install HDP 2.0 using the Sandbox
  2. Prepare for the Storm-yarn deployment
  3. Set up Storm on your cluster

All steps were done on Mac OS 10.8; they apply to Windows and Linux as well, with only minor differences.

Install HDP 2.0 using the Sandbox

What you need: VMware Fusion installed on your Mac, and the HDP 2.0 Sandbox virtual appliance file downloaded from Hortonworks.

Steps

1. Open VMware Fusion.

2. Click File -> Import.

3. The file browser opens. Select the appropriate Sandbox appliance file and click Open.

4. The Import Library dialog opens. Unless you have specific needs, the default values are fine. Click Import.

5. The appliance is imported. A console window opens and the VM shows up in the Virtual Machine Library.

6. Start the VM. When the Sandbox has finished starting up, the console displays the login instructions. Press fn+control+option+F5 to switch to the command line.

7. Use a browser on your host machine to open the URL displayed on the console. You will see the Sandbox index page. HDP 2.0 is now up and running.

Prepare for the Storm-yarn deployment

(To distinguish commands run on the local Mac from those run inside the VM: Mac OS commands start with '$' and VM commands start with '#'.)

1. Open the Terminal on your Mac.

2. SSH from your Mac to the VM:

$ ssh root@<Your VM IP>


The default password is hadoop.

3. Disable SELinux using the command:

# setenforce 0

4. Edit the SELinux configuration file:

# vi /etc/selinux/config

Change SELINUX=enforcing to SELINUX=disabled
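To confirm the change took effect right away, you can check the current mode (a quick sanity check, not part of the original steps):

# getenforce

It should now report Permissive (or Disabled after the next reboot).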

5. Stop the iptables firewall and disable it at boot:

# service iptables stop
# chkconfig iptables off
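
Optionally, verify that the firewall is really off:

# service iptables status

It should report that iptables is not running.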

6. Install the wget package

# yum -y install wget

7. Get the repo for Ambari and copy it to /etc/yum.repos.d

# wget http://public-repo-1.hortonworks.com/ambari-beta/centos6/1.x/beta/ambari.repo
# cp ambari.repo /etc/yum.repos.d

8. Install the Oracle Java 7 development environment.
  1. Download the JDK from the Oracle website to your Mac's local storage; choose the Linux x64 rpm package.
  2. Copy the downloaded rpm into the VM. Assuming it was saved to ~/Download:
    $ scp ~/Download/jdk-7u<version>-linux-x64.rpm root@<Your VM IP>:/tmp
  3. Install the rpm:
    # rpm -ivh /tmp/jdk-7u<version>-linux-x64.rpm
  4. Set JAVA_HOME & PATH:
    # vi ~/.bash_profile
    Insert these lines before export PATH:
    JAVA_HOME=/usr/java/jdk1.7.0_45/
    export JAVA_HOME
    PATH=$PATH:$HOME/bin
    PATH=$JAVA_HOME/bin:$PATH
  5. Test your installation to see whether JDK 7 was installed successfully; if it reports java 1.6 instead of 1.7, check the previous steps again:
    # java -version
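
If the new PATH has not taken effect in your current shell, reload the profile first; the version line below assumes the 7u45 rpm used above, so your update number may differ:

# source ~/.bash_profile
# java -version
java version "1.7.0_45"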
9. Install ntpd and start the service to keep the clock in sync

# yum -y install ntp
# service ntpd start

10. Run the Ambari server setup; -s runs a silent setup with defaults and -j points Ambari at the JDK path installed above:

# ambari-server setup -s -j /usr/java/jdk1.7<version>/

11. Start the Ambari server & agent

# ambari-server start
# ambari-agent start

12. Install Maven 3.1.1

# wget http://mirror.symnds.com/software/Apache/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz
# tar -zxvf apache-maven-3.1.1-bin.tar.gz
# mv apache-maven-3.1.1 /usr/lib/maven
# vi ~/.bash_profile

Add this line before export PATH:

PATH=$PATH:/usr/lib/maven/bin
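
Reload the profile and confirm Maven resolves from the new PATH; assuming the 3.1.1 download above, the first line of output should begin with:

# source ~/.bash_profile
# mvn -version
Apache Maven 3.1.1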

13. Get a copy of the repository for Storm on YARN from GitHub

# wget https://github.com/anfeng/storm-yarn/archive/master.zip
# unzip master.zip

14. Edit pom.xml so that the repositories and Hadoop version point at Hortonworks.

# cd storm-yarn-master
# vi pom.xml

Uncomment the lines for the Hortonworks repository and HDP Hadoop version, and comment out the defaults they replace.
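
If you are unsure which Hadoop version string to use, you can read it straight from the VM; the value shown here is only an example and may differ on your Sandbox build:

# hadoop version | head -1
Hadoop 2.2.0.2.0.6.0-76

Use that string as the Hadoop version in pom.xml.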

Set up Storm on your cluster

1. Create a work folder to hold working files for Storm. Let's say '~/workspace/storm'

# mkdir -p ~/workspace/storm

2. Copy storm.zip from the storm-yarn-master lib folder to the work folder, then go there and unzip it.

# cp ~/storm-yarn-master/lib/storm.zip ~/workspace/storm
# cd ~/workspace/storm
# unzip storm.zip

3. Add the storm-0.9.0-wip21 and storm-yarn-master bin folders to the PATH

# vi ~/.bash_profile

Add this line before export PATH

PATH=$PATH:$HOME/workspace/storm/storm-0.9.0-wip21/bin:$HOME/storm-yarn-master/bin
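
Reload the profile and check that both commands now resolve (a quick sanity check, not part of the original steps):

# source ~/.bash_profile
# which storm storm-yarn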

4. Add the root user to the hdfs group

# usermod -a -G hdfs root

5. Put storm.zip into HDFS at /lib/storm/0.9.0-wip[*]/storm.zip, where [*] matches the bundled Storm version (wip21 here)

# sudo -u hdfs hadoop fs -put ~/storm-yarn-master/lib/storm.zip /lib/storm/0.9.0-wip[*]/storm.zip

You may encounter permission problems; if so, try this:


# cp ~/storm-yarn-master/lib/storm.zip /tmp/storm.zip
# chown hdfs:hdfs /tmp/storm.zip
# sudo -u hdfs hadoop fs -put /tmp/storm.zip /lib/storm/0.9.0-wip[*]/storm.zip
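
The put also fails if the target directory does not exist yet. Since the Storm bundled with storm-yarn here is 0.9.0-wip21, you can create the directory first (illustrative, with [*] filled in as wip21):

# sudo -u hdfs hadoop fs -mkdir -p /lib/storm/0.9.0-wip21
# sudo -u hdfs hadoop fs -put /tmp/storm.zip /lib/storm/0.9.0-wip21/storm.zip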

6. Run the Maven build in the storm-yarn-master folder.

# cd ~/storm-yarn-master
# mvn package
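
The build also runs the storm-yarn test suite, which can be slow or flaky inside the Sandbox VM. If it blocks you, skipping the tests is a common workaround (not part of the original steps):

# mvn package -DskipTests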


7. Start Storm

# storm-yarn launch

You may encounter permission issues when launching Storm; if so, revisit step 4 in this section and make sure root is in the hdfs group.

8. Look up the YARN application id of the Storm master; it should look like application_<numbers>_<numbers>:

# yarn application -list
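
If several applications are listed, you can narrow the output down; this assumes the Storm master's application name contains "storm":

# yarn application -list | grep -i storm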

9. Fetch the Storm configuration and store it as ~/.storm/storm.yaml so the storm command can find it when submitting jobs.
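
If the ~/.storm directory does not exist yet, create it first (a small extra step, not in the original):

# mkdir -p ~/.storm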

# storm-yarn getStormConfig -appId <application id>  -output ~/.storm/storm.yaml


10. Try running two of the sample topologies. Run these from the storm-yarn-master folder, which contains lib/storm-starter-0.0.1-SNAPSHOT.jar:

Word Count:

# storm jar lib/storm-starter-0.0.1-SNAPSHOT.jar storm.starter.WordCountTopology

Exclamation:

# storm jar lib/storm-starter-0.0.1-SNAPSHOT.jar storm.starter.ExclamationTopology 

11. Storm UI monitoring tool

Still not working yet... to be continued.
