hadoop-2.7.3.tar.gz + spark-2.0.2-bin-hadoop2.7.tgz + zeppelin-0.6.2-incubating-bin-all.tgz (master, slave1 and slave2) (recommended) (illustrated guide)
Without further ado, straight to the useful stuff!
Here I use Ubuntu 16.04; of course you can do the same on CentOS 6.5, the choice of OS is a minor detail. The relevant earlier posts:
CentOS 6.5 installation guide
hadoop-2.6.0.tar.gz + spark-1.5.2-bin-hadoop2.6.tgz cluster setup (single node) (Ubuntu)
Configuration-file tips for setting up the various big-data subprojects (for CentOS and Ubuntu) (recommended)
Creating user groups, users and passwords, and deleting groups and users (for CentOS and Ubuntu)
Installing VMware Tools for Ubuntu-16.04-desktop in VMware (illustrated)
Ubuntukylin-14.04-desktop installation steps (without partitioning)
Post-installation configuration for Ubuntu 14.04
Ubuntu 11.10 graphical installation steps
Ubuntukylin-14.04-desktop installation steps (with partitioning)
The release UIs of the various Ubuntu versions
Installing Spark on YARN (spark-1.6.1-bin-hadoop2.6.tgz + hadoop-2.6.0.tar.gz) (master, slave1 and slave2) (recommended)
Installing Spark in standalone mode (spark-1.6.1-bin-hadoop2.6.tgz) (master, slave1 and slave2)
Here I use Ubuntu 16.04 and run Spark in YARN mode; see:
Installing Spark on YARN (spark-1.6.1-bin-hadoop2.6.tgz + hadoop-2.6.0.tar.gz) (master, slave1 and slave2) (recommended)
hadoop-2.6.0.tar.gz + spark-1.5.2-bin-hadoop2.6.tgz cluster setup (single node) (Ubuntu)
System configuration file (/etc/profile)
#jdk
export JAVA_HOME=/home/spark/app/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin

#scala
export SCALA_HOME=/home/spark/app/scala
export PATH=$PATH:$SCALA_HOME/bin

#hadoop
export HADOOP_HOME=/home/spark/app/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

#spark
export SPARK_HOME=/home/spark/app/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

#zookeeper
export ZOOKEEPER_HOME=/home/spark/app/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin

#zeppelin
export ZEPPELIN_HOME=/home/spark/app/zeppelin
export PATH=$PATH:$ZEPPELIN_HOME/bin
My installation paths are all under /home/spark/app.
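To apply these in the current shell and sanity-check them, something like the following works (a minimal sketch under my paths; adjust if yours differ):
source /etc/profile                  # re-read the profile in the current shell
echo $JAVA_HOME $SPARK_HOME          # should print the /home/spark/app/... paths set above
hadoop version && java -version     # confirms the binaries on PATH are the ones just configured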
Hadoop configuration files
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value></value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/hadoop-2.7.3/tmp</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
</configuration>
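As a quick check that HDFS really answers at the fs.defaultFS address above (a hedged sketch; it assumes the Hadoop cluster from my earlier posts is already formatted and running):
hdfs getconf -confKey fs.defaultFS   # should print hdfs://master:9000
hdfs dfs -ls /                       # should list the HDFS root without connection errors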
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:</value>
</property>
<property>
<name>dfs.replication</name>
<value></value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop/hadoop-2.7.3/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop/hadoop-2.7.3/dfs/data</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:</value>
</property>
</configuration>
yarn-site.xml
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:</value>
</property>
</configuration>
slaves
slave1
slave2
hadoop-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/home/spark/app/jdk

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
Spark configuration files
spark-env.sh
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append

# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of executors to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)

# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR Where log files are stored. (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)

export JAVA_HOME=/home/spark/app/jdk
export SCALA_HOME=/home/spark/app/scala
export HADOOP_HOME=/home/spark/app/hadoop
export HADOOP_CONF_DIR=/home/spark/app/hadoop/etc/hadoop
export SPARK_MASTER_IP=192.168.80.145
export SPARK_WORKER_MEMORY=1G
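With HADOOP_CONF_DIR pointing at the Hadoop configuration above, a small Spark-on-YARN smoke test could look like this. It is only a sketch of mine, not part of the original setup; the jar glob assumes the stock examples/jars layout of the Spark 2.0.2 binary distribution:
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  $SPARK_HOME/examples/jars/spark-examples*.jar 10
# a successful run prints "Pi is roughly 3.14..." and confirms Spark can talk to YARN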
slaves
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# A Spark Worker will be started on each of the machines listed below.
slave1
slave2
Be sure to read my earlier posts first.
Next, I move straight on to downloading, installing and configuring Zeppelin.
First, a word about versions: why does the title of this post spell out the exact version numbers? Because mismatched versions can really trip you up.
I also tried hadoop-2.7.3.tar.gz + spark-2.1.0-bin-hadoop2.7.tgz.
However, as far as Zeppelin versions go, the project's version management is currently not great and still needs work.
The trap is this: Zeppelin 0.6.2 does not support Spark 2.1.0.
So I went back and read the official documentation carefully:
The conclusion was that I had to use an older Spark. Fortunately Spark 2.0 is supported, so I installed Spark 2.0.2 instead.
That said, if you do not have any cluster environment yet, the post linked above is still worth following; just change the version number from 2.1.0 to 2.0.2, everything else is identical.
Fortunately, if you learn to use symbolic links flexibly, you can keep multiple versions installed side by side.
Big-data environment setup: creating and removing soft links (recommended)
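The idea, in a nutshell (a minimal sketch under my /home/spark/app layout; the 2.1.0 directory here is only a hypothetical second version):
# point the generic name at one concrete version
ln -s /home/spark/app/spark-2.0.2-bin-hadoop2.7 /home/spark/app/spark
# later, switch the whole stack by repointing the link (-f to replace it, -n to treat the existing link itself as the target)
ln -sfn /home/spark/app/spark-2.1.0-bin-hadoop2.7 /home/spark/app/spark
# everything that uses SPARK_HOME=/home/spark/app/spark follows the link automatically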
1. Download the Zeppelin package
Apache Zeppelin officially provides both a source package and binary packages; download whichever suits your needs.
wget http://archive.apache.org/dist/zeppelin/zeppelin-0.6.2/zeppelin-0.6.2-bin-all.tgz
or
wget http://ftp.meisei-u.ac.jp/mirror/apache/dist/incubator/zeppelin/0.6.2-incubating/zeppelin-0.6.2-incubating-bin-all.tgz
http://mirrors.tuna.tsinghua.edu.cn/apache/zeppelin/zeppelin-0.6.2/zeppelin-0.6.2-bin-all.tgz
Or download it from the official site.
I generally like to install software under /usr/local/app, /home/hadoop/app or /home/spark/app.
2. Unpack it and create a symbolic link
tar -xvf zeppelin-0.6.2-bin-all.tgz
spark@master:~/app$ pwd
/home/spark/app
spark@master:~/app$ ls
hadoop jdk1..0_79 scala-2.10. zookeeper
hadoop-2.7.3 jdk1..0_60 spark zookeeper-3.4.
jdk scala spark-2.0.2-bin-hadoop2.7
spark@master:~/app$ cp /home/zhouls/Downloads/zeppelin-0.6.2-bin-all.tgz .
spark@master:~/app$ ls
hadoop jdk1..0_60 spark-2.0.2-bin-hadoop2.7
hadoop-2.7.3 scala zeppelin-0.6.2-bin-all.tgz
jdk scala-2.10. zookeeper
jdk1..0_79 spark zookeeper-3.4.
spark@master:~/app$ tar -zxvf zeppelin-0.6.2-bin-all.tgz
spark@master:~/app$ ls
hadoop jdk1..0_60 spark-2.0.2-bin-hadoop2.7 zookeeper-3.4.
hadoop-2.7.3 scala zeppelin-0.6.2-bin-all
jdk scala-2.10. zeppelin-0.6.2-bin-all.tgz
jdk1..0_79 spark zookeeper
spark@master:~/app$ rm zeppelin-0.6.2-bin-all.tgz
spark@master:~/app$ ln -s zeppelin-0.6.2-bin-all/ zeppelin
spark@master:~/app$ ll
total
drwxrwxr-x spark spark Jun : ./
drwxr-xr-x spark spark Jun : ../
lrwxrwxrwx spark spark Jun : hadoop -> hadoop-2.7.3/
drwxr-xr-x spark spark Jun : hadoop-2.7.3/
lrwxrwxrwx spark spark Jun : jdk -> jdk1..0_60/
drwxr-xr-x spark spark Apr jdk1..0_79/
drwxr-xr-x spark spark Aug jdk1..0_60/
lrwxrwxrwx spark spark Jun : scala -> scala-2.10./
drwxrwxr-x spark spark Feb scala-2.10./
lrwxrwxrwx spark spark Jun : spark -> spark-2.0.2-bin-hadoop2.7/
drwxr-xr-x spark spark Nov spark-2.0.2-bin-hadoop2.7/
lrwxrwxrwx spark spark Jun : zeppelin -> zeppelin-0.6.2-bin-all/
drwxr-xr-x spark spark Oct zeppelin-0.6.2-bin-all/
lrwxrwxrwx spark spark Jun : zookeeper -> zookeeper-3.4./
drwxr-xr-x spark spark Jun : zookeeper-3.4./
spark@master:~/app$
3. Update the environment variable configuration
#zeppelin
export ZEPPELIN_HOME=/home/spark/app/zeppelin
export PATH=$PATH:$ZEPPELIN_HOME/bin
root@master:~# vim /etc/profile
root@master:~# source /etc/profile
root@master:~#
This shows the changes have taken effect.
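A quick way to confirm the new variables are visible (sketch only):
echo $ZEPPELIN_HOME                          # expect /home/spark/app/zeppelin
ls $ZEPPELIN_HOME/bin/zeppelin-daemon.sh     # the daemon script should now be reachable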
4. Install and edit the configuration files:
(1) Installation
The network-install (netinst) package requires running the following command:
./bin/install-interpreter.sh --all
The full binary package does not need this; just go into the Zeppelin root directory and edit the configuration files (which is what I do here).
cd zeppelin-0.6.2-bin-all
(2)/home/spark/app/zeppelin-0.6.2-bin-all/conf/zeppelin-env.sh
export JAVA_HOME=/home/spark/app/jdk
export SPARK_HOME=/home/spark/app/spark
export HADOOP_HOME=/home/spark/app/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export ZEPPELIN_INTP_JAVA_OPTS="-XX:PermSize=512M -XX:MaxPermSize=1024M"
spark@master:~/app$ cd zeppelin-0.6.2-bin-all/
spark@master:~/app/zeppelin-0.6.2-bin-all$ pwd
/home/spark/app/zeppelin-0.6.2-bin-all
spark@master:~/app/zeppelin-0.6.2-bin-all$ ls
bin lib notebook zeppelin-server-0.6.2.jar
conf LICENSE NOTICE zeppelin-web-0.6.2.war
interpreter licenses README.md
spark@master:~/app/zeppelin-0.6.2-bin-all$ cd conf/
spark@master:~/app/zeppelin-0.6.2-bin-all/conf$ pwd
/home/spark/app/zeppelin-0.6.2-bin-all/conf
spark@master:~/app/zeppelin-0.6.2-bin-all/conf$ ls
configuration.xsl README.md zeppelin-env.sh.template
interpreter-list shiro.ini zeppelin-site.xml.template
log4j.properties zeppelin-env.cmd.template
spark@master:~/app/zeppelin-0.6.2-bin-all/conf$ mv zeppelin-env.sh.template zeppelin-env.sh
spark@master:~/app/zeppelin-0.6.2-bin-all/conf$ ls
configuration.xsl README.md zeppelin-env.sh
interpreter-list shiro.ini zeppelin-site.xml.template
log4j.properties zeppelin-env.cmd.template
spark@master:~/app/zeppelin-0.6.2-bin-all/conf$ vi zeppelin-env.sh
export JAVA_HOME=/home/spark/app/jdk
export SPARK_HOME=/home/spark/app/spark
export HADOOP_HOME=/home/spark/app/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export ZEPPELIN_INTP_JAVA_OPTS="-XX:PermSize=512M -XX:MaxPermSize=1024M"
The above mainly sets the installation paths for Java, Spark and Hadoop, plus the JVM memory options for the interpreter process.
(3)/home/spark/app/zeppelin-0.6.2-bin-all/conf/zeppelin-site.xml
Here, let me share a bit of general knowledge about web UI ports in the big-data world (the following are the ports I habitually use):
YARN web UI: 8088
HBase web UI: 60010, 60030
Hadoop web UI: 50070
Spark web UI: 8081
Oozie web UI: 11000
Kibana web UI: 5601
Hue web UI: 8888
Elasticsearch/Kopf/Head web UI: 9200
ResourceManager web UI: 23188
History Server web UI: 19888
Azkaban web UI: 8443
Storm web UI: 9999
With the above in mind, and to avoid port conflicts, I changed Zeppelin's default port from 8080 to 8099; adjust it to whatever suits your own machine:
<property>
<name>zeppelin.server.port</name>
<value>8099</value>
<description>Server port.</description>
</property>
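Note that, as the conf listing above shows, the binary ships only zeppelin-site.xml.template, so this property needs an actual zeppelin-site.xml to live in; a minimal sketch under my paths:
cd /home/spark/app/zeppelin/conf
cp zeppelin-site.xml.template zeppelin-site.xml
vi zeppelin-site.xml     # change zeppelin.server.port from 8080 to 8099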
5. Start and stop:
I modified my earlier start-up script to the following (of course, you can leave yours unchanged):
#!/bin/bash
echo -e "\033[31m ========Start The Cluster======== \033[0m"
echo -e "\033[31m Starting Hadoop Now !!! \033[0m"
/opt/hadoop-2.7.3/sbin/start-all.sh
echo -e "\033[31m Starting Spark Now !!! \033[0m"
/opt/spark-2.0.2-bin-hadoop2.7/sbin/start-all.sh
echo -e "\033[31m Starting Zeppelin Now !!! \033[0m"
/opt/zeppelin-0.6.2-bin-all/bin/zeppelin-daemon.sh start
echo -e "\033[31m The Result Of The Command \"jps\" : \033[0m"
jps
echo -e "\033[31m ========END======== \033[0m"
And the earlier shutdown script to the following (again, optional):
#!/bin/bash
echo -e "\033[31m ===== Stopping The Cluster ====== \033[0m"
echo -e "\033[31m Stopping Zeppelin Now !!! \033[0m"
/opt/zeppelin-0.6.2-bin-all/bin/zeppelin-daemon.sh stop
echo -e "\033[31m Stopping Spark Now !!! \033[0m"
/opt/spark-2.0.2-bin-hadoop2.7/sbin/stop-all.sh
echo -e "\033[31m Stopping Hadoop Now !!! \033[0m"
/opt/hadoop-2.7.3/sbin/stop-all.sh
echo -e "\033[31m The Result Of The Command \"jps\" : \033[0m"
jps
echo -e "\033[31m ======END======== \033[0m"
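If you keep these two scripts, note that they assume the software lives under /opt; with my /home/spark/app layout the paths would need adjusting. A usage sketch (the file names start-cluster.sh and stop-cluster.sh are just placeholders of mine):
chmod +x start-cluster.sh stop-cluster.sh
./start-cluster.sh      # brings up HDFS/YARN, Spark and Zeppelin in that order
./stop-cluster.sh       # shuts them down in the reverse order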
Start Apache Zeppelin
Run the following command from the Zeppelin home directory:
spark@master:~/app/zeppelin-0.6.2-bin-all$ bin/zeppelin-daemon.sh start
spark@master:~/app/zeppelin-0.6.2-bin-all$ pwd
/home/spark/app/zeppelin-0.6.2-bin-all
spark@master:~/app/zeppelin-0.6.2-bin-all$ ls
bin lib notebook zeppelin-server-0.6.2.jar
conf LICENSE NOTICE zeppelin-web-0.6.2.war
interpreter licenses README.md
spark@master:~/app/zeppelin-0.6.2-bin-all$ bin/zeppelin-daemon.sh start
Log dir doesn't exist, create /home/spark/app/zeppelin/logs
Pid dir doesn't exist, create /home/spark/app/zeppelin/run
Zeppelin start                                             [  OK  ]
spark@master:~/app/zeppelin-0.6.2-bin-all$ ls
bin LICENSE NOTICE zeppelin-web-0.6.2.war
conf licenses README.md
interpreter logs run
lib notebook zeppelin-server-0.6.2.jar
spark@master:~/app/zeppelin-0.6.2-bin-all$
Its start/stop commands are bin/zeppelin-daemon.sh start and bin/zeppelin-daemon.sh stop.
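The same script should also be able to report the daemon's state and restart it; as far as I can tell these subcommands exist in this release, but treat the following as a sketch:
bin/zeppelin-daemon.sh status     # prints whether the Zeppelin server process is up
bin/zeppelin-daemon.sh restart    # stop followed by start
tail -n 50 logs/zeppelin-*.log    # the logs/ directory is created on first start, as seen above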
Once it is up, open localhost:8099 to reach the Zeppelin home page.
http://localhost:8099/#/
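From a shell on the master you can also confirm the port answers before opening a browser (this assumes the 8099 change above):
curl -I http://localhost:8099/    # expect an HTTP 200 (or a redirect) from the Zeppelin server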
In my case, after opening the Zeppelin web UI you can see the Zeppelin home page.
This display style is really nothing difficult; if you have used Python notebooks for machine learning, deep learning or data mining, it will feel very familiar.
If you are interested, keep following this blog; more quality posts on this topic are on the way. Let's learn and improve together!
Follow-up post
Getting started with Zeppelin: creating a new Notebook (Part 1)