Published: 2016-09-27
1. Hadoop Overview
Hadoop is a distributed-system infrastructure developed by the Apache Foundation. It lets users write distributed programs without knowing the low-level details of the distribution layer, harnessing the full power of a cluster for high-speed computation and storage.

First, install the build dependencies:

[hadoop@localhost log]$ sudo yum -y install java-1.8.0-openjdk-devel java-1.8.0-openjdk-headless \
    java-1.8.0-openjdk findbugs cmake protobuf-compiler
export FINDBUGS_HOME=/usr/share/findbugs
export MAVEN_HOME=/usr/share/maven
export MAVEN_OPTS="-Xms256m -Xmx512m"
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.25-5.rc16.fc21.loongson.m
PATH=/usr/lib64/ccache:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/h
export PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin
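Before starting the build, it is worth confirming that the whole toolchain is actually on the PATH. This quick check is not in the original writeup; the versions printed will differ by system:

# Verify that the JDK, Maven, CMake, and protoc are all visible
java -version && mvn -version && cmake --version && protoc --version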
tar xvf hadoop-2.7.2-src.tar.gz -C /usr/local/
cd /usr/local/hadoop-2.7.2-src
mvn clean package -Pdist,native,src -DskipTests -Dtar
During compilation, the JVM may crash with an error like this:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x000000ffe18f46fc, pid=5300, tid=1099154321904
#
# JRE version: OpenJDK Runtime Environment (8.0_25-b17) (build 1.8.0_25-rc16-b17)
# Java VM: OpenJDK 64-Bit Server VM (25.25-b02 mixed mode linux- compressed oops)
# Problematic frame:
# J 62748 C2 scala.tools.asm.ClassWriter.get(Lscala/tools/asm/Item;)Lscala/tools/asm/Item; (49 bytes) @ 0x000000ffe18f46fc [0x000000ffe18f46a0+0x5c]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#
Increasing the Maven heap and disabling the parallel collectors works around the crash:

export MAVEN_OPTS="-Xms3560m -Xmx3560m -XX:-UseParallelGC -XX:-UseParallelOldGC"
# Configure a proxy for Maven (in the <proxies> section of settings.xml)
<proxies>
  <proxy>
    <id>proxy01</id>
    <active>true</active>
    <protocol>http</protocol>
    <host>ip_address</host>
    <port>port</port>
    <nonProxyHosts>localhost</nonProxyHosts>
  </proxy>
  <proxy>
    <id>proxy02</id>
    <active>true</active>
    <protocol>https</protocol>
    <host>ip_address</host>
    <port>port</port>
    <nonProxyHosts>localhost</nonProxyHosts>
  </proxy>
</proxies>
# Reinstall ca-certificates
sudo yum -y install ca-certificates
After these adjustments the build completes successfully:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 10.769 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 8.793 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 18.834 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 2.414 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 9.653 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 25.215 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 20.682 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 26.240 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 23.112 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [45:23 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 45.079 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [01:27 min]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 1.104 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [21:45 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [02:13 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 47.832 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 34.029 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 1.075 s]
[INFO] hadoop-yarn ........................................ SUCCESS [ 1.354 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [07:20 min]
[INFO] hadoop-yarn-common ................................. SUCCESS [35:51 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [ 1.020 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [01:42 min]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [01:58 min]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 25.288 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [01:05 min]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [02:52 min]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 40.356 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 54.780 s]
[INFO] hadoop-yarn-server-sharedcachemanager .............. SUCCESS [ 24.110 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [ 1.017 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 21.223 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 17.608 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [ 1.145 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [ 42.659 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [ 34.614 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [ 1.905 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [33:18 min]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [32:57 min]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 28.868 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [01:00 min]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 46.223 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 29.643 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 15.580 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 40.229 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [ 24.719 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 33.669 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 59.792 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [ 19.986 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 47.303 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 30.258 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 22.306 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [ 19.212 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [ 27.362 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 6.723 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 34.857 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [ 37.631 s]
[INFO] Apache Hadoop Azure support ........................ SUCCESS [ 30.848 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [01:02 min]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 3.409 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 33.821 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 55.501 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.768 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [03:44 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:33 h
[INFO] Finished at: 2016-08-01T14:22:17+08:00
[INFO] Final Memory: 125M/3096M
[INFO] ------------------------------------------------------------------------
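With the build done, the distribution lands under hadoop-dist/target. As a quick sanity check of the freshly built native libraries, assuming the default Maven output path, you can run Hadoop's built-in check:

# Optional: confirm the native libraries load (path is the default dist output)
cd hadoop-dist/target/hadoop-2.7.2
bin/hadoop checknative -a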
Next, set up passwordless SSH between the nodes. Uncomment the public-key settings in /etc/ssh/sshd_config:

#RSAAuthentication yes
#PubkeyAuthentication yes
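If a node has no key pair yet, generate one first. A minimal sketch, assuming the default key path and an empty passphrase:

# Generate an RSA key pair (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
chmod 700 ~/.ssh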
cat id_rsa.pub >> authorized_keys
ssh root@10.20.42.22 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh root@10.20.42.10 cat ~/.ssh/id_rsa.pub >> authorized_keys
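Once the keys have been distributed to every node, passwordless login can be confirmed before continuing (the IPs are the slave nodes from the list below):

# Each command should print the remote hostname without prompting for a password
ssh root@10.20.42.22 hostname
ssh root@10.20.42.10 hostname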
master 10.20.42.199
slave1 10.20.42.22
slave2 10.20.42.10
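The startup log later refers to the master as hadoop-master-001, so every node's /etc/hosts should map hostnames to these addresses. A sketch; the slave hostnames here are illustrative assumptions:

# /etc/hosts on every node (slave names are examples, not from the original)
10.20.42.199 hadoop-master-001
10.20.42.22  slave1
10.20.42.10  slave2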
# Edit /etc/profile and set JAVA_HOME and related environment variables
vi /etc/profile
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.25-6.b17.rc16.fc21.loongson.mips64el
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
# Reload the environment and verify that the JDK works
source /etc/profile && java -version
The properties below go into the Hadoop configuration files under etc/hadoop (the grouping follows the standard file layout):

# core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.20.42.199:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/loongson/hadoop/tmp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131702</value>
  </property>
</configuration>

# hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/loongson/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/loongson/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>10.20.42.199:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

# mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>10.20.42.199:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>10.20.42.199:19888</value>
  </property>
</configuration>

# yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>10.20.42.199:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>10.20.42.199:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>10.20.42.199:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>10.20.42.199:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>10.20.42.199:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>768</value>
  </property>
</configuration>
# Set JAVA_HOME explicitly (typically in etc/hadoop/hadoop-env.sh)
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.25-6.b17.rc16.fc21.loongson.mips64el
# The slaves file (etc/hadoop/slaves) lists the worker nodes:
10.20.42.10
10.20.42.22
scp -r /home/loongson/hadoop-2.7.2 10.20.42.10:/home/loongson
scp -r /home/loongson/hadoop-2.7.2 10.20.42.22:/home/loongson
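The original text jumps straight to starting the daemons, but on a fresh cluster HDFS must be formatted once on the master before the first start. A minimal reminder sketch:

# One-time NameNode format on the master (destroys any existing HDFS metadata)
cd /home/loongson/hadoop-2.7.2
bin/hdfs namenode -format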
Starting the cluster with sbin/start-all.sh produces output like this:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/09/02 08:49:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop-master-001]
hadoop-master-001: starting namenode, logging to /home/loongson/hadoop-2.7.2/logs/hadoop-root-namenode-localhost.localdomain.out
10.20.42.22: starting datanode, logging to /home/loongson/hadoop-2.7.2/logs/hadoop-root-datanode-localhost.localdomain.out
10.20.42.22: /home/loongson/hadoop-2.7.2/bin/hdfs: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.25-6.b17.rc16.fc21.loongson.mips64el
10.20.42.22: /home/loongson/hadoop-2.7.2/bin/hdfs: line 304: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.25-6.b17.rc16.fc21.loongson.mips64el/bin/java: Success
10.20.42.10: starting datanode, logging to /home/loongson/hadoop-2.7.2/logs/hadoop-root-datanode-localhost.localdomain.out
Starting secondary namenodes [hadoop-master-001]
hadoop-master-001: secondarynamenode running as process 18418. Stop it first.
16/09/02 08:50:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
resourcemanager running as process 16937. Stop it first.
10.20.42.10: starting nodemanager, logging to /home/loongson/hadoop-2.7.2/logs/yarn-root-nodemanager-localhost.localdomain.out
10.20.42.22: starting nodemanager, logging to /home/loongson/hadoop-2.7.2/logs/yarn-root-nodemanager-localhost.localdomain.out
10.20.42.22: /home/loongson/hadoop-2.7.2/bin/yarn: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.25-6.b17.rc16.fc21.loongson.mips64el
10.20.42.22: /home/loongson/hadoop-2.7.2/bin/yarn: line 333: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.25-6.b17.rc16.fc21.loongson.mips64el/bin/java: Success
Running jps on each node confirms the daemons (OServerMain and OGremlinConsole appear to belong to an unrelated OrientDB instance on the same machine):

master:
32497 OServerMain
3506 SecondaryNameNode
3364 DataNode
5654 Jps
2582 OGremlinConsole
16937 ResourceManager
3263 NameNode

slaves:
21580 Jps
20622 DataNode
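As a final check, a small HDFS round trip verifies that the cluster accepts reads and writes. The paths below are arbitrary examples, not from the original:

# Write a file into HDFS and list it back (run from the Hadoop install directory)
cd /home/loongson/hadoop-2.7.2
bin/hdfs dfs -mkdir -p /tmp/smoke
bin/hdfs dfs -put etc/hadoop/core-site.xml /tmp/smoke/
bin/hdfs dfs -ls /tmp/smoke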