天问

Spark Distributed Installation

Distributed installation on CentOS:

First install Hadoop, ZooKeeper, HBase, and the other prerequisites.

Download the Spark binary package and extract it to /opt/spark-2.0.2-bin-hadoop2.7/.

 
 

Configuration

conf/slaves (one worker hostname per line):
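A minimal slaves file, using placeholder hostnames — substitute the actual worker hosts of your cluster:

```
# conf/slaves — one Spark worker host per line (hypothetical hostnames)
slave1
slave2
```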

 
 

conf/spark-env.sh:

export HADOOP_CONF_DIR=/opt/hadoop-2.7.1/etc/hadoop

export SCALA_HOME=/opt/scala-2.12.2   # note: prebuilt Spark 2.0.x targets Scala 2.11, so a 2.11 install is safer

export JAVA_HOME=/usr/java/jdk1.7.0_21

export SPARK_WORKER_MEMORY=1g
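A few more settings are commonly added to spark-env.sh for a standalone cluster. The master hostname and core counts below are assumptions for illustration, not values from the original setup:

```
# Additional spark-env.sh settings (hypothetical values)
export SPARK_MASTER_HOST=master        # hostname the standalone master binds to
export SPARK_WORKER_CORES=2            # cores each worker may use
export SPARK_WORKER_INSTANCES=1       # worker JVMs per node
```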

 
 

 
 

Environment variables (append to /etc/profile or ~/.bashrc on every node, then source the file):

export SPARK_HOME=/opt/spark-2.0.2-bin-hadoop2.7

export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

 
 

conf/log4j.properties (optional, to adjust log verbosity):
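Spark only ships a template for this file; copy it and, if desired, lower the console log level. The WARN level shown here is a suggestion, not part of the original setup:

```shell
# Run in the Spark conf directory used in this post.
cd /opt/spark-2.0.2-bin-hadoop2.7/conf
cp log4j.properties.template log4j.properties
# Then edit log4j.properties, e.g. change:
#   log4j.rootCategory=INFO, console
# to:
#   log4j.rootCategory=WARN, console
```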

 
 

Start:

 
 

Start in order: Hadoop first, then ZooKeeper and HBase, and Spark last.

start-dfs.sh

start-yarn.sh

 
 

/opt/zookeeper-3.4.9/bin/zkServer.sh start

 
 

/opt/spark-2.0.2-bin-hadoop2.7/sbin/start-master.sh

/opt/spark-2.0.2-bin-hadoop2.7/sbin/start-slaves.sh
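The startup sequence above can be collected into a small helper script. This is a sketch: the HBase install path is an assumption (the post does not give one), and ZooKeeper must actually be started on every node of its ensemble, not just the master:

```shell
#!/bin/sh
# start-cluster.sh — hypothetical helper, run on the master node.
# Order matters: HDFS/YARN first, then ZooKeeper, then HBase, Spark last.

start-dfs.sh
start-yarn.sh

# ZooKeeper: repeat this on every node in the ensemble.
/opt/zookeeper-3.4.9/bin/zkServer.sh start

# HBase (adjust to the actual HBase install path).
/opt/hbase/bin/start-hbase.sh

# Spark standalone master and workers (workers are read from conf/slaves).
/opt/spark-2.0.2-bin-hadoop2.7/sbin/start-master.sh
/opt/spark-2.0.2-bin-hadoop2.7/sbin/start-slaves.sh
```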

 
 

Check the daemons with jps:
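Run jps on each node. Assuming the master co-locates the Hadoop/HBase/Spark master daemons and the slaves run the worker daemons, the output should look roughly like this (process IDs will differ, and the exact set depends on how daemons are distributed):

```
# On the master (illustrative):
NameNode
SecondaryNameNode
ResourceManager
QuorumPeerMain
HMaster
Master
Jps

# On each slave (illustrative):
DataNode
NodeManager
QuorumPeerMain
HRegionServer
Worker
Jps
```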

 
 

Spark master web UI:

 
 

http://master:8080

 
 

Spark application UI (only available while an application is running):

http://192.168.6.161:4040/jobs/

 
 

Stop, in the reverse order of startup (if HBase is running, stop it with stop-hbase.sh before stopping ZooKeeper):

stop-master.sh

stop-slaves.sh

/opt/zookeeper-3.4.9/bin/zkServer.sh stop

stop-yarn.sh

stop-dfs.sh

Original post: http://blog.yoqi.me/?p=3724