Deploying Impala

Impala is installed from RPM packages; it is the only major component in this deployment installed this way. The main reason is that Cloudera does not provide a ready-made tar package for Impala, and building from source fails for assorted hard-to-diagnose reasons, so for convenience the approach below is used. Because the other components (Hadoop, Hive, HBase, ZooKeeper) were installed from tarballs rather than RPMs, rpm cannot see them as installed packages; the Impala RPMs are therefore installed with --nodeps and the dependencies are satisfied manually via symlinks in step 8.

The installation media are as follows:

$ ls

bigtop-utils-0.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.noarch.rpm

impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm           

impala-shell-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm

impala-catalog-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm   

impala-state-store-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm

impala-debuginfo-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm 

impala-udf-devel-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm

impala-server-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm

sentry-1.5.1+cdh5.10.0+272-1.cdh5.10.0.p0.70.el7.noarch.rpm

[hadoop@db01 impala270]$ rpm -ivh impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm

warning: impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY

error: Failed dependencies:
     bigtop-utils >= 0.7 is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     hadoop is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     hadoop-hdfs is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     hadoop-yarn is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     hadoop-mapreduce is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     hbase is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     hive >= 0.12.0+cdh5.1.0 is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     zookeeper is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     hadoop-libhdfs is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     avro-libs is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     parquet is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     sentry >= 1.3.0+cdh5.1.0 is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     sentry is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     /lib/lsb/init-functions is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64
     libhdfs.so.0.0.0()(64bit) is needed by impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64

[hadoop@db01 impala270]$

1. Install bigtop-utils and sentry

$ sudo yum -y install redhat-lsb

sudo rpm -ivh bigtop-utils-0.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.noarch.rpm

warning: bigtop-utils-0.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...
    1:bigtop-utils-0.7.0+cdh5.10.0+0-1.################################# [100%]
   

$ sudo rpm -ivh sentry-1.5.1+cdh5.10.0+272-1.cdh5.10.0.p0.70.el7.noarch.rpm --nodeps

warning: sentry-1.5.1+cdh5.10.0+272-1.cdh5.10.0.p0.70.el7.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...
    1:sentry-1.5.1+cdh5.10.0+272-1.cdh5################################# [100%]

2. Install impalad (the impala base package)

Note: impalad must be installed on every DataNode server.

sudo rpm -ivh impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm --nodeps

warning: impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...
    1:impala-2.7.0+cdh5.10.0+0-1.cdh5.1################################# [100%]

3. Install impala-server

sudo rpm -ivh impala-server-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm

warning: impala-server-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...
    1:impala-server-2.7.0+cdh5.10.0+0-1################################# [100%]
   
    -- Perform the steps above on every DataNode.
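
If passwordless ssh and passwordless sudo are available for the hadoop user on the other DataNodes (an assumption about this environment, not something stated above), the installation can be repeated in a loop instead of logging in to each node; the host names db02–db04 are the ones used later in this document:

# convenience sketch: push the RPMs and install them on the remaining DataNodes
# run from the directory on db01 that holds the packages
for h in db02 db03 db04; do
    scp bigtop-utils-0.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.noarch.rpm \
        sentry-1.5.1+cdh5.10.0+272-1.cdh5.10.0.p0.70.el7.noarch.rpm \
        impala-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm \
        impala-server-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm ${h}:/tmp/
    ssh ${h} "sudo yum -y install redhat-lsb && \
              sudo rpm -ivh /tmp/bigtop-utils-*.rpm && \
              sudo rpm -ivh --nodeps /tmp/sentry-*.rpm && \
              sudo rpm -ivh --nodeps /tmp/impala-2.7.0*.rpm && \
              sudo rpm -ivh /tmp/impala-server-*.rpm"
done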
   

4. Modify the configuration files

sudo vim /etc/default/impala

IMPALA_CATALOG_SERVICE_HOST=db01

IMPALA_STATE_STORE_HOST=db01

These two values identify the node that will run the catalog and state-store services;

in this deployment they are placed on the same node as Hive (db01).
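
For reference, a sketch of what the edited /etc/default/impala typically looks like with the CDH packaging (exact contents may differ slightly between package versions; only the two host lines were changed here):

IMPALA_CATALOG_SERVICE_HOST=db01
IMPALA_STATE_STORE_HOST=db01
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala

IMPALA_CATALOG_ARGS=" -log_dir=${IMPALA_LOG_DIR}"
IMPALA_STATE_STORE_ARGS=" -log_dir=${IMPALA_LOG_DIR} -state_store_port=${IMPALA_STATE_STORE_PORT}"
IMPALA_SERVER_ARGS=" \
    -log_dir=${IMPALA_LOG_DIR} \
    -catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
    -state_store_port=${IMPALA_STATE_STORE_PORT} \
    -use_statestore \
    -state_store_host=${IMPALA_STATE_STORE_HOST} \
    -be_port=${IMPALA_BACKEND_PORT}"

ENABLE_CORE_DUMPS=false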

After editing the file on one node, copy it to the other nodes:

sudo scp /etc/default/impala db02:/etc/default/impala

sudo scp /etc/default/impala db03:/etc/default/impala

sudo scp /etc/default/impala db04:/etc/default/impala

sudo vim /etc/default/bigtop-utils

export JAVA_HOME=/opt/service/jdk1.7.0_67

sudo scp /etc/default/bigtop-utils db02:/etc/default/bigtop-utils

sudo scp /etc/default/bigtop-utils db03:/etc/default/bigtop-utils

sudo scp /etc/default/bigtop-utils db04:/etc/default/bigtop-utils
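
A quick sanity check (again assuming passwordless ssh between the nodes) that the JAVA_HOME line is in place everywhere:

# print each host name and its JAVA_HOME setting from the bigtop-utils defaults file
for h in db01 db02 db03 db04; do
    ssh ${h} "hostname; grep JAVA_HOME /etc/default/bigtop-utils"
done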

vim /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml

<property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.client.file-block-storage-locations.timeout.millis</name>
    <value>10000</value>
</property>
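
The first property enables the DataNode block-metadata RPC that Impala uses for scan scheduling; the second raises the client-side timeout for those calls. The edited hdfs-site.xml must be present on every DataNode, and the DataNodes have to be restarted before the change takes effect. A sketch of that, assuming the tarball layout used elsewhere in this document and passwordless ssh (if db01 also runs a DataNode, restart it there as well):

HADOOP_HOME=/opt/cdh5/hadoop-2.6.0-cdh5.10.0
for h in db02 db03 db04; do
    scp ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml ${h}:${HADOOP_HOME}/etc/hadoop/
    # restart the DataNode so it re-reads hdfs-site.xml
    ssh ${h} "${HADOOP_HOME}/sbin/hadoop-daemon.sh stop datanode; ${HADOOP_HOME}/sbin/hadoop-daemon.sh start datanode"
done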

5. Copy the Hive and Hadoop configuration files to the Impala configuration directory

sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml db01:/etc/impala/conf/

sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml db02:/etc/impala/conf/

sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml db03:/etc/impala/conf/

sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml db04:/etc/impala/conf/

sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml db01:/etc/impala/conf/

sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml db02:/etc/impala/conf/

sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml db03:/etc/impala/conf/

sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml db04:/etc/impala/conf/

sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml db01:/etc/impala/conf/

sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml db02:/etc/impala/conf/

sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml db03:/etc/impala/conf/

sudo scp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml db04:/etc/impala/conf/
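
The repeated scp commands above can equivalently be written as a loop (a convenience sketch using the same files and nodes as above):

for h in db01 db02 db03 db04; do
    # push hive-site.xml, core-site.xml and hdfs-site.xml into Impala's conf directory on each node
    sudo scp /opt/cdh5/hive-1.1.0-cdh5.10.0/conf/hive-site.xml \
             /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/core-site.xml \
             /opt/cdh5/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hdfs-site.xml \
             ${h}:/etc/impala/conf/
done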

6. Install impala-state-store and impala-catalog on the Hive node

$ sudo rpm -ivh impala-state-store-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm

warning: impala-state-store-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...
    1:impala-state-store-2.7.0+cdh5.10.################################# [100%]

$ sudo rpm -ivh impala-catalog-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm

warning: impala-catalog-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID e8f86acd: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...
    1:impala-catalog-2.7.0+cdh5.10.0+0-################################# [100%]

$ sudo cp /mnt/mysql-connector-java-5.1.22-bin.jar /var/lib/impala/

7. Install impala-shell on all nodes

sudo yum -y install python-setuptools
  sudo rpm -ivh impala-shell-2.7.0+cdh5.10.0+0-1.cdh5.10.0.p0.71.el7.x86_64.rpm
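
A quick check that the shell installed correctly (it should print the 2.7.0 CDH build string):

impala-shell --version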
 

8. Replace the bundled Impala dependency jars with symlinks to the cluster's own jars

sudo rm -rf /usr/lib/impala/lib/hadoop-annotations.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-auth.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-aws.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-common.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-hdfs.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-mapreduce-client-common.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-mapreduce-client-core.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-mapreduce-client-jobclient.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-mapreduce-client-shuffle.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-api.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-client.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-common.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-applicationhistoryservice.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-common.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-nodemanager.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-resourcemanager.jar

sudo rm -rf /usr/lib/impala/lib/hadoop-yarn-server-web-proxy.jar

sudo rm -rf /usr/lib/impala/lib/hbase-annotations.jar

sudo rm -rf /usr/lib/impala/lib/hbase-client.jar

sudo rm -rf /usr/lib/impala/lib/hbase-common.jar

sudo rm -rf /usr/lib/impala/lib/hbase-protocol.jar

sudo rm -rf /usr/lib/impala/lib/hive-ant.jar

sudo rm -rf /usr/lib/impala/lib/hive-beeline.jar

sudo rm -rf /usr/lib/impala/lib/hive-common.jar

sudo rm -rf /usr/lib/impala/lib/hive-exec.jar

sudo rm -rf /usr/lib/impala/lib/hive-hbase-handler.jar

sudo rm -rf /usr/lib/impala/lib/hive-metastore.jar

sudo rm -rf /usr/lib/impala/lib/hive-serde.jar

sudo rm -rf /usr/lib/impala/lib/hive-service.jar

sudo rm -rf /usr/lib/impala/lib/hive-shims-common.jar

sudo rm -rf /usr/lib/impala/lib/hive-shims.jar

sudo rm -rf /usr/lib/impala/lib/hive-shims-scheduler.jar

sudo rm -rf /usr/lib/impala/lib/zookeeper.jar
    

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/common/lib/hadoop-annotations-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-annotations.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/common/lib/hadoop-auth-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-auth.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce1/lib/hadoop-aws-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-aws.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/common/hadoop-common-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-common.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/hdfs/hadoop-hdfs-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-hdfs.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-mapreduce-client-common.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-mapreduce-client-core.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-mapreduce-client-jobclient.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-mapreduce-client-shuffle.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-api-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-api.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-client-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-client.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-common-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-common.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-applicationhistoryservice.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-common-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-common.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-nodemanager.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-resourcemanager.jar

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.10.0.jar /usr/lib/impala/lib/hadoop-yarn-server-web-proxy.jar

sudo ln -s /opt/cdh5/hbase-1.2.0-cdh5.10.0/lib/hbase-annotations-1.2.0-cdh5.10.0.jar /usr/lib/impala/lib/hbase-annotations.jar

sudo ln -s /opt/cdh5/hbase-1.2.0-cdh5.10.0/lib/hbase-client-1.2.0-cdh5.10.0.jar /usr/lib/impala/lib/hbase-client.jar

sudo ln -s /opt/cdh5/hbase-1.2.0-cdh5.10.0/lib/hbase-common-1.2.0-cdh5.10.0.jar /usr/lib/impala/lib/hbase-common.jar

sudo ln -s /opt/cdh5/hbase-1.2.0-cdh5.10.0/lib/hbase-protocol-1.2.0-cdh5.10.0.jar /usr/lib/impala/lib/hbase-protocol.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-ant-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-ant.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-beeline-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-beeline.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-common-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-common.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-exec-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-exec.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-hbase-handler-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-hbase-handler.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-metastore-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-metastore.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-serde-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-serde.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-service-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-service.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-shims-common-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-shims-common.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-shims-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-shims.jar

sudo ln -s /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/hive-shims-scheduler-1.1.0-cdh5.10.0.jar /usr/lib/impala/lib/hive-shims-scheduler.jar

sudo ln -s /opt/cdh5/zookeeper-3.4.5-cdh5.10.0/zookeeper-3.4.5-cdh5.10.0.jar /usr/lib/impala/lib/zookeeper.jar

sudo rm -rf /usr/lib/impala/lib/libhadoop.so

sudo rm -rf /usr/lib/impala/lib/libhadoop.so.1.0.0

sudo rm -rf /usr/lib/impala/lib/libhdfs.so

sudo rm -rf /usr/lib/impala/lib/libhdfs.so.0.0.0

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/lib/native/libhadoop.so /usr/lib/impala/lib/libhadoop.so

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/lib/native/libhadoop.so.1.0.0 /usr/lib/impala/lib/libhadoop.so.1.0.0

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/lib/native/libhdfs.so /usr/lib/impala/lib/libhdfs.so

sudo ln -s /opt/cdh5/hadoop-2.6.0-cdh5.10.0/lib/native/libhdfs.so.0.0.0 /usr/lib/impala/lib/libhdfs.so.0.0.0
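
Since every jar follows the same pattern (delete the copy bundled with the RPM, then link to the cluster's own jar), all of step 8 can also be expressed as a short script. The following is a condensed sketch with a few representative jars, assuming the same install paths as above; run it with sudo on every node where Impala is installed and extend the relink calls to cover the full list:

#!/bin/bash
# Re-point Impala's bundled jars and native libraries at the cluster's tarball installs.
HADOOP_HOME=/opt/cdh5/hadoop-2.6.0-cdh5.10.0
HIVE_HOME=/opt/cdh5/hive-1.1.0-cdh5.10.0
HBASE_HOME=/opt/cdh5/hbase-1.2.0-cdh5.10.0
ZK_HOME=/opt/cdh5/zookeeper-3.4.5-cdh5.10.0
IMPALA_LIB=/usr/lib/impala/lib

# relink <target file> <name inside $IMPALA_LIB>: drop the RPM copy, link the cluster's file
relink() {
    rm -f "${IMPALA_LIB}/$2"
    ln -s "$1" "${IMPALA_LIB}/$2"
}

relink ${HADOOP_HOME}/share/hadoop/common/hadoop-common-2.6.0-cdh5.10.0.jar hadoop-common.jar
relink ${HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-2.6.0-cdh5.10.0.jar     hadoop-hdfs.jar
relink ${HBASE_HOME}/lib/hbase-client-1.2.0-cdh5.10.0.jar                   hbase-client.jar
relink ${HIVE_HOME}/lib/hive-metastore-1.1.0-cdh5.10.0.jar                  hive-metastore.jar
relink ${ZK_HOME}/zookeeper-3.4.5-cdh5.10.0.jar                             zookeeper.jar
# ... repeat relink for the remaining Hadoop/HBase/Hive jars listed above ...

# native libraries
relink ${HADOOP_HOME}/lib/native/libhadoop.so       libhadoop.so
relink ${HADOOP_HOME}/lib/native/libhadoop.so.1.0.0 libhadoop.so.1.0.0
relink ${HADOOP_HOME}/lib/native/libhdfs.so         libhdfs.so
relink ${HADOOP_HOME}/lib/native/libhdfs.so.0.0.0   libhdfs.so.0.0.0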

9. Start Impala

# On the node where Hive is installed, start the state-store and catalog services

sudo service impala-state-store start

sudo service impala-catalog start

# On every DataNode, start impalad

sudo service impala-server start
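
To verify the deployment, check that the daemons are running and that impala-shell can connect. The ports below are the Impala defaults (impala-shell talks to impalad on 21000; the debug web UIs are at 25000 for impalad, 25010 for the statestore and 25020 for the catalog); db02 is used here as an example of a node running impalad:

# processes and logs on the local node
ps -ef | grep [i]mpala
ls /var/log/impala/

# connect to an impalad from any node and run a trivial query
impala-shell -i db02:21000 -q "select version();"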
