Impala 2.7.0-cdh5.x.x Installation and Deployment
Deploying Impala
Impala is installed from RPM packages. It is the only major component in this deployment installed via RPM; the main reason is that Cloudera does not provide a ready-made tarball for Impala, and compiling from source fails for all sorts of unpredictable reasons, so for convenience it is deployed as follows.
The installation media are as follows:
$ ls
bigtop-utils-0.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.noarch.rpm
impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
impala-shell-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
impala-catalog-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
impala-state-store-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
impala-debuginfo-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
impala-udf-devel-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
impala-server-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
sentry-1.5.1+cdh5.10.0+272-1.cdh5.10.0.p0.70.el7.noarch.rpm
[hadoop@db01 impala270]$ rpm -ivh impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
warning: impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY
error: Failed dependencies:
bigtop-utils >= 0.7 is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
hadoop is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
hadoop-hdfs is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
hadoop-yarn is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
hadoop-mapreduce is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
hbase is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
hive >= 0.12.0+cdh5.1.0 is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
zookeeper is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
hadoop-libhdfs is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
avro-libs is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
parquet is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
sentry >= 1.3.0+cdh5.1.0 is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
sentry is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
/lib/lsb/init-functions is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
libhdfs.so.0.0.0()(64bit) is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
[hadoop@db01 impala270]$
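The dependency errors appear because Hadoop, Hive, HBase, ZooKeeper and the other CDH components in this cluster were installed from tarballs rather than RPMs, so the RPM database knows nothing about them. A quick way to confirm this before falling back to --nodeps:
$ rpm -qa | grep -iE 'hadoop|hive|hbase|zookeeper|parquet|avro'
# if this prints nothing, none of those components are registered with rpm,
# which is why --nodeps is used below; only the genuinely missing prerequisites
# (redhat-lsb, bigtop-utils, sentry) are installed first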
1. Install bigtop-utils and Sentry
# redhat-lsb provides /lib/lsb/init-functions, which the Impala packages require
$ sudo yum -y install redhat-lsb
sudo rpm -ivh bigtop-utils-0.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.noarch.rpm
warning: bigtop-utils-0.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:bigtop-utils-0.7.0+cdh5.10.0+0-1.################################# [100%]
$ sudo rpm -ivh sentry-1.5.1+cdh5.10.0+272-1.cdh5.10.0.p0.70.el7.noarch.rpm --nodeps
warning: sentry-1.5.1+cdh5.10.0+272-1.cdh5.10.0.p0.70.el7.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:sentry-1.5.1+cdh5.10.0+272-1.cdh5################################# [100%]
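A quick check that both packages are now registered:
$ rpm -q bigtop-utils sentry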
2. Install impalad
Note: impalad must be installed on every DataNode server.
sudo rpm -ivh impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm --nodeps
warning: impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:impala-2.7.0+cdh5.10.0+0-1.cdh5.1################################# [100%]
3. Install impala-server
sudo rpm -ivh impala-server-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
warning: impala-server-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:impala-server-2.7.0+cdh5.10.0+0-1################################# [100%]
-- Perform the steps above on every DataNode; a scripted sketch for the remaining nodes follows.
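One way to repeat steps 1-3 on the remaining DataNodes is a small ssh loop. This is only a sketch: it assumes passwordless ssh and sudo on db02-db04 and that the RPMs sit in the current directory on db01.
# copy the required RPMs to each DataNode and install them there
for host in db02 db03 db04; do
  scp bigtop-utils-*.rpm sentry-*.rpm \
      impala-2.7.0*.rpm impala-server-*.rpm ${host}:/tmp/
  ssh -t ${host} 'sudo yum -y install redhat-lsb && \
    sudo rpm -ivh --nodeps /tmp/bigtop-utils-*.rpm /tmp/sentry-*.rpm \
                           /tmp/impala-2.7.0*.rpm /tmp/impala-server-*.rpm'
done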
4. Edit the configuration files
sudo vim /etc/default/impala
IMPALA_CATALOG_SERVICE_HOST=db01
IMPALA_STATE_STORE_HOST=db01
These two hostnames identify the node on which the catalog and statestore services will be installed; in this deployment the catalog and statestore must be co-located with Hive on the same node.
After editing the file on one node, copy it to the other nodes (a fuller sketch of the file follows the scp commands below):
sudo scp /etc/default/impala db02:/etc/default/impala
sudo scp /etc/default/impala db03:/etc/default/impala
sudo scp /etc/default/impala db04:/etc/default/impala
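For reference, the relevant part of the packaged /etc/default/impala typically looks like the sketch below. The ports, log directory and argument lists are package defaults, not values taken from this cluster, so treat them as assumptions:
IMPALA_CATALOG_SERVICE_HOST=db01
IMPALA_STATE_STORE_HOST=db01
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala

IMPALA_CATALOG_ARGS=" -log_dir=${IMPALA_LOG_DIR}"
IMPALA_STATE_STORE_ARGS=" -log_dir=${IMPALA_LOG_DIR} -state_store_port=${IMPALA_STATE_STORE_PORT}"
IMPALA_SERVER_ARGS=" \
    -log_dir=${IMPALA_LOG_DIR} \
    -catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
    -state_store_port=${IMPALA_STATE_STORE_PORT} \
    -state_store_host=${IMPALA_STATE_STORE_HOST} \
    -be_port=${IMPALA_BACKEND_PORT}"
ENABLE_CORE_DUMPS=false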
sudo vim /etc/default/bigtop-utils
export JAVA_HOME=/opt/service/jdk1.7.0_67
sudo scp /etc/default/bigtop-utils db02:/etc/default/bigtop-utils
sudo scp /etc/default/bigtop-utils db03:/etc/default/bigtop-utils
sudo scp /etc/default/bigtop-utils db04:/etc/default/bigtop-utils
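A quick sanity check on each node that the JDK path resolves (the file is simply sourced by the Bigtop helper scripts):
. /etc/default/bigtop-utils && ls "${JAVA_HOME}/bin/java"   # should list the java binary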
vim /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout.millis</name>
  <value>10000</value>
</property>
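The edited hdfs-site.xml has to reach every DataNode, and HDFS must be restarted for the new block-location settings to take effect. A sketch using the hosts and tarball path from this deployment, assuming db01 runs the NameNode and the stock sbin scripts are used:
for host in db02 db03 db04; do
  scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml \
      ${host}:/opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml
done
# restart HDFS so the new settings take effect
/opt/cdh5/hadoop-2.6.0-cdh5.10.0/sbin/stop-dfs.sh
/opt/cdh5/hadoop-2.6.0-cdh5.10.0/sbin/start-dfs.sh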
5. Copy the Hive and Hadoop configuration files to Impala
sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml db01:/etc/impala/conf/
sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml db02:/etc/impala/conf/
sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml db03:/etc/impala/conf/
sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml db04:/etc/impala/conf/
sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml db01:/etc/impala/conf/
sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml db02:/etc/impala/conf/
sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml db03:/etc/impala/conf/
sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml db04:/etc/impala/conf/
sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml db01:/etc/impala/conf/
sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml db02:/etc/impala/conf/
sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml db03:/etc/impala/conf/
sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml db04:/etc/impala/conf/
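The twelve scp commands above can be collapsed into a small loop over the same files and hosts; shown only as a convenience:
for host in db01 db02 db03 db04; do
  sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml \
           /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml \
           /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml \
           ${host}:/etc/impala/conf/
done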
6. Install impala-state-store and impala-catalog on the Hive node
$ sudo rpm -ivh impala-state-store-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
warning: impala-state-store-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:impala-state-store-2.7.0+cdh5.10.################################# [100%]
$ sudo rpm -ivh impala-catalog-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
warning: impala-catalog-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:impala-catalog-2.7.0+cdh5.10.0+0-################################# [100%]
Copy the MySQL JDBC driver onto this node as well; the Hive metastore in this deployment appears to be MySQL-backed, so the catalog service needs the driver available.
$ sudo cp /mnt/mysql-connector-java-5.1.22-bin.jar /var/lib/impala/
7. Install impala-shell on all nodes
sudo yum -y install python-setuptools
sudo rpm -ivh impala-shell-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
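A quick smoke test of the shell (no daemons need to be running for this):
impala-shell --version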
8. Replace Impala's bundled dependency jars with symlinks to the cluster's own jars
sudo rm -rf /usr/lib/impala/lib/hadoop-annotations.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-auth.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-aws.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-common.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-hdfs.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-mapreduce-client-common.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-mapreduce-client-core.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-mapreduce-client-jobclient.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-mapreduce-client-shuffle.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-api.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-client.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-common.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-applicationhistoryservice.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-common.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-nodemanager.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-resourcemanager.jar
sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-web-proxy.jar
sudo rm -rf /usr/lib/impala/lib/hbase-annotations.jar
sudo rm -rf /usr/lib/impala/lib/hbase-client.jar
sudo rm -rf /usr/lib/impala/lib/hbase-common.jar
sudo rm -rf /usr/lib/impala/lib/hbase-protocol.jar
sudo rm -rf /usr/lib/impala/lib/hive-ant.jar
sudo rm -rf /usr/lib/impala/lib/hive-beeline.jar
sudo rm -rf /usr/lib/impala/lib/hive-common.jar
sudo rm -rf /usr/lib/impala/lib/hive-exec.jar
sudo rm -rf /usr/lib/impala/lib/hive-hbase-handler.jar
sudo rm -rf /usr/lib/impala/lib/hive-metastore.jar
sudo rm -rf /usr/lib/impala/lib/hive-serde.jar
sudo rm -rf /usr/lib/impala/lib/hive-service.jar
sudo rm -rf /usr/lib/impala/lib/hive-shims-common.jar
sudo rm -rf /usr/lib/impala/lib/hive-shims.jar
sudo rm -rf /usr/lib/impala/lib/hive-shims-scheduler.jar
sudo rm -rf /usr/lib/impala/lib/zookeeper.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/common/lib/hadoop-annotations-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-annotations.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/common/lib/hadoop-auth-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-auth.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce1/lib/hadoop-aws-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-aws.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/common/hadoop-common-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-common.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/hdfs/hadoop-hdfs-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-hdfs.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-mapreduce-client-common.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-mapreduce-client-core.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-mapreduce-client-jobclient.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-mapreduce-client-shuffle.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-api-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-api.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-client-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-client.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-common-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-common.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-applicationhistoryservice.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-common-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-common.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-nodemanager.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-resourcemanager.jar
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-web-proxy.jar
sudo ln -s /opt/cdh5/hbase-1.2.0-cdh5.10.0/lib/hbase-annotations-1.2.0-cdh5.10.0.jar /usr/lib/impala/lib/hbase-annotations.jar
sudo ln -s /opt/cdh5/hbase-1.2.0-cdh5.10.0/lib/hbase-client-1.2.0-cdh5.10.0.jar /usr/lib/impala/lib/hbase-client.jar
sudo ln -s /opt/cdh5/hbase-1.2.0-cdh5.10.0/lib/hbase-common-1.2.0-cdh5.10.0.jar /usr/lib/impala/lib/hbase-common.jar
sudo ln -s /opt/cdh5/hbase-1.2.0-cdh5.10.0/lib/hbase-protocol-1.2.0-cdh5.10.0.jar /usr/lib/impala/lib/hbase-protocol.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-ant-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-ant.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-beeline-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-beeline.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-common-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-common.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-exec-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-exec.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-hbase-handler-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-hbase-handler.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-metastore-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-metastore.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-serde-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-serde.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-service-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-service.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-shims-common-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-shims-common.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-shims-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-shims.jar
sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-shims-scheduler-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-shims-scheduler.jar
sudo ln -s /opt/cdh5/zookeeper-3.4.5-cdh5.10.0/zookeeper-3.4.5-cdh5.10.0.jar /usr/lib/impala/lib/zookeeper.jar
sudo rm -rf /usr/lib/impala/lib/libhadoop.so
sudo rm -rf /usr/lib/impala/lib/libhadoop.so.1.0.0
sudo rm -rf /usr/lib/impala/lib/libhdfs.so
sudo rm -rf /usr/lib/impala/lib/libhdfs.so.0.0.0
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/lib/native/libhadoop.so /usr/lib/impala/lib/libhadoop.so
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/lib/native/libhadoop.so.1.0.0 /usr/lib/impala/lib/libhadoop.so.1.0.0
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/lib/native/libhdfs.so /usr/lib/impala/lib/libhdfs.so
sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/lib/native/libhdfs.so.0.0.0 /usr/lib/impala/lib/libhdfs.so.0.0.0
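Before starting the daemons it is worth checking that none of the new symlinks are dangling, i.e. that every target jar and library actually exists on this node:
sudo find /usr/lib/impala/lib -xtype l   # should print nothing; any output is a broken symlink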
9. Start Impala
# On the node where Hive (and the catalog/statestore packages) are installed, start the statestore and catalog services
sudo service impala-state-store start
sudo service impala-catalog start
# On every DataNode, start impalad
sudo service impala-server start
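A few checks that everything came up. The web UI ports are the Impala defaults (25000 for impalad, 25010 for the statestore, 25020 for the catalog) and /var/log/impala is the packaged default log directory, so treat both as assumptions:
# daemon logs
ls /var/log/impala/
# built-in web UIs (run from any node that can reach db01)
curl -s http://db01:25010/ > /dev/null && echo "statestore up"
curl -s http://db01:25020/ > /dev/null && echo "catalog up"
curl -s http://db01:25000/ > /dev/null && echo "impalad up"
# connect with the shell and list databases
impala-shell -i db01 -q "show databases;"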