Contents
- Install HDP 2.0 using Sandbox
- Prepare for the Storm-yarn deployment
- Set up Storm on your cluster
All the steps are based on Mac OS 10.8; they also apply to Windows and Linux with some minor differences.
Install HDP 2.0 using Sandbox
What you need:
- VMware Fusion 5 or VMware Player 5 (VirtualBox 4.2 works as well)
- Hortonworks Sandbox 2.0
Steps
1. Open VMware Fusion.
2. Select File > Import.
3. The file browser opens. Select the appropriate Sandbox appliance file.
Click Open.
4. The Import Library window opens. Unless you have specific needs, the default values are fine.
Click Import.
5. The appliance is imported. A console window opens and the VM shows up in the Virtual Machine Library.
6. Start the VM. When the Sandbox has finished starting up, the console displays the login instructions.
Press fn+control+option+f5 to switch to the command line.
7. Use a browser on your host machine to open the URL displayed on the console. You will see the index page. HDP 2.0 is now set up.
Prepare for the Storm-yarn deployment
(To distinguish commands run on the local Mac from those run in the VM, Mac OS command lines start with '$' and VM command lines start with '#'.)
1. Find the IP address of your VM; it is shown on the Sandbox console.
2. ssh from your Mac to the VM:
$ ssh root@<Your VM IP>
The default password is hadoop.
3. Disable SELinux for the current session:
# setenforce 0
4. Edit the SELinux configuration file:
# vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
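If you prefer to make this change without opening an editor, a sed one-liner does the same edit (assuming the stock layout of the config file):
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config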
5. Stop the iptables firewall and disable it.
# service iptables stop
# chkconfig iptables off
6. Install the wget package:
# yum -y install wget
7. Get the repo file for Ambari and copy it to /etc/yum.repos.d:
# wget http://public-repo-1.hortonworks.com/ambari-beta/centos6/1.x/beta/ambari.repo
# cp ambari.repo /etc/yum.repos.d
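Before installing from it, you can check that yum now sees the Ambari repository:
# yum repolist | grep -i ambari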
8. Install the Oracle Java 7 development environment.
- Download the JDK from the Oracle website to your Mac's local storage; choose the Linux x64 rpm package.
- Copy the downloaded rpm into the VM; let's say we downloaded the JDK to ~/Download.
- Install the jdk file
- Set JAVA_HOME & PATH
$ scp ~/Download/jdk-7u<version>-linux-x64.rpm root@<Your VM IP>:/tmp
# rpm -ivh /tmp/jdk-7u<version>-linux-x64.rpm
# vi ~/.bash_profile
- Insert these lines before export PATH:
JAVA_HOME=/usr/java/jdk1.7.0_45/
export JAVA_HOME
PATH=$PATH:$HOME/bin
PATH=$JAVA_HOME/bin:$PATH
- Test your installation to see if JDK 7 was installed successfully (if it reports Java 1.6 instead of 1.7, go back and check the previous steps).
# java -version
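If it still reports 1.6, the updated profile may not have been loaded yet. Reload it and check which binary and JAVA_HOME are picked up; which java should point into /usr/java/jdk1.7.0_45/ if the PATH edit took effect:
# source ~/.bash_profile
# which java
# echo $JAVA_HOME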
9. Install ntpd, start the service, and sync the time:
# yum -y install ntp
# service ntpd start
10. Run the Ambari server setup
# ambari-server setup -s -j /usr/java/jdk1.7<version>/
11. Start Ambari server & agent
# ambari-server start
# ambari-agent start
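Both Ambari daemons can report their own state, which is a quick way to confirm they came up:
# ambari-server status
# ambari-agent status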
12. Install Apache Maven 3.1.1:
# wget http://mirror.symnds.com/software/Apache/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz
# tar -zxvf apache-maven-3.1.1-bin.tar.gz
# mkdir -p /usr/lib/maven
# mv apache-maven-3.1.1 /usr/lib/maven
# vi ~/.bash_profile
Add this line before export PATH:
PATH=$PATH:/usr/lib/maven/apache-maven-3.1.1/bin
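Reload the profile and confirm Maven resolves; the reported version should be 3.1.1:
# source ~/.bash_profile
# mvn -version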
13. Get a copy of the Storm on YARN repository from GitHub:
# wget https://github.com/anfeng/storm-yarn/archive/master.zip
# unzip master.zip
14. Edit the repositories and Hadoop version in pom.xml so they point at Hortonworks.
# cd storm-yarn-master
# vi pom.xml
Uncomment the Hortonworks repository and version entries and comment out the corresponding default ones.
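If you are not sure which lines to toggle, grep can point you at the relevant entries; the search terms below are only guesses at what the Hortonworks repository and Hadoop version entries contain, so adjust them to your copy of pom.xml:
# grep -n -i hortonworks pom.xml
# grep -n -i hadoop pom.xml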
Set up Storm on your cluster
1. Create a work folder to hold working files for Storm; let's say ~/workspace/storm.
# mkdir -p ~/workspace/storm
2. Copy storm.zip to the work folder, then go there and unzip it:
# cp lib/storm.zip ~/workspace/storm
# cd ~/workspace/storm
# unzip storm.zip
3. Add the storm-0.9.0-wip21 and storm-yarn-master bin folders to the PATH:
# vi ~/.bash_profile
Add this line before export PATH:
PATH=$PATH:$HOME/workspace/storm/storm-0.9.0-wip21/bin:$HOME/storm-yarn-master/bin
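Reload the profile and make sure both the storm and storm-yarn commands resolve from the new PATH entries:
# source ~/.bash_profile
# which storm
# which storm-yarn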
4. Add the root user to the hdfs group.
# usermod -a -G hdfs root
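You can confirm the membership took effect:
# id root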
5. Add storm.zip to hdfs /lib/storm/0.9.0-wip[*]/storm.zip
# sudo -u hdfs hadoop fs -put ~/storm-yarn-master/lib/storm.zip /lib/storm/0.9.0-wip[*]/storm.zip
You may encounter permission problems; if so, try this:
# cp ~/storm-yarn-master/lib/storm.zip /tmp/storm.zip
# chown hdfs:hdfs /tmp/storm.zip
# sudo -u hdfs hadoop fs -put /tmp/storm.zip /lib/storm/0.9.0-wip[*]/storm.zip
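If the put fails because the target directory does not exist yet, create it first and then list it to confirm the upload (substitute your actual wip version for the placeholder):
# sudo -u hdfs hadoop fs -mkdir -p /lib/storm/0.9.0-wip[*]
# sudo -u hdfs hadoop fs -ls /lib/storm/0.9.0-wip[*]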
6. Run the Maven build in the storm-yarn-master folder:
# cd storm-yarn-master
# mvn package
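If the build fails during the test phase in the sandbox environment, Maven's standard flag for skipping tests is an option, assuming you only need the packaged artifacts:
# mvn package -DskipTests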
7. Start Storm
# storm-yarn launch
You may encounter permission issues when launching Storm; if so, go back to step 4 in this section and add root to the hdfs group.
8. Get the Storm configuration using the YARN application id (the application id looks like application_<number>_<number>):
# yarn application -list
9. Store the storm.yaml file in the ~/.storm directory so the storm command can find it when submitting jobs:
# storm-yarn getStormConfig -appId <application id> -output ~/.storm/storm.yaml
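The -output path assumes the ~/.storm directory already exists; if it does not, create it first and re-run the command above:
# mkdir -p ~/.storm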
10. Try running two of the sample topologies:
Word Count:
# storm jar lib/storm-starter-0.0.1-SNAPSHOT.jar storm.starter.WordCountTopology
Exclamation:
# storm jar lib/storm-starter-0.0.1-SNAPSHOT.jar storm.starter.ExclamationTopology
11. Storm UI monitoring tool
Still not working yet... to be continued.