Ranger 2.1.0 Source Build and Installation

Build environment preparation

Requirement     Example version
JDK 8           Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Maven 3.5       3.5 or later
OS (CentOS 7)   kernel 3.10.0-957.el7.x86_64
git             git version 1.8.3.1
gcc             gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
Python 3.7      Python 3.7.0
Node.js         6.4.1

Source download

https://ranger.apache.org/download.html
Example: wget https://dlcdn.apache.org/ranger/2.1.0/apache-ranger-2.1.0.tar.gz
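Optionally, verify the download before unpacking. Apache publishes a SHA-512 checksum next to each release tarball; the URL below assumes the same dlcdn path as the example above:

wget https://dlcdn.apache.org/ranger/2.1.0/apache-ranger-2.1.0.tar.gz.sha512
sha512sum apache-ranger-2.1.0.tar.gz    # compare the digest against the contents of the .sha512 file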

Build

[root@local opt]# tar -zxvf apache-ranger-2.1.0.tar.gz
[root@local opt]# cd apache-ranger-2.1.0
[root@local opt]# mvn -DskipTests=true clean compile package install
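The full build is heavy (the Security Admin Web Application module alone takes several minutes). If Maven exits with an OutOfMemoryError, giving its JVM more heap before re-running may help; this is a generic Maven setting, not anything Ranger-specific:

export MAVEN_OPTS="-Xms512m -Xmx2g"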

Common errors

E1

[ERROR] Failed to execute goal on project ranger-hive-plugin: Could not resolve dependencies for project org.apache.ranger:ranger-hive-plugin:jar:2.1.0: Could not find artifact org.glassfish:javax.el:jar:3.0.1 in MavenCentral (https://repo1.maven.org/maven2/) -> [Help 1]

E1 fix

Maven Central does not host a plain 3.0.1 release of org.glassfish:javax.el (only 3.0.1-bXX builds), so the transitively referenced artifact cannot be resolved. Excluding it from the dependencies that pull it in works around the error; the same change is needed in several modules.

Module ranger-hbase-plugin-shim: in its pom.xml, add a javax.el exclusion to the hbase-server dependency

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>${hbase.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.el</artifactId>
        </exclusion>
        <exclusion>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Module hbase-agent: in its pom.xml, add a javax.el exclusion to the hbase-server dependency

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>${hbase.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>*</artifactId>
        </exclusion>
        <exclusion>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.el</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Module hive-agent: in its pom.xml, add javax.el exclusions to the hive-jdbc and hive-service dependencies

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>${hive.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.el</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-service</artifactId>
    <version>${hive.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.el</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Module ranger-hive-plugin-shim: in its pom.xml, add javax.el exclusions to the hive-jdbc and hive-service dependencies

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>${hive.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.el</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-service</artifactId>
    <version>${hive.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.el</artifactId>
        </exclusion>
    </exclusions>
</dependency>
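After editing the pom files, re-run the build. If only the module named in the E1 error failed, Maven's resume option can skip the modules that already built; the module id below is taken from the error message, and a plain full rebuild works just as well:

# resume from the module that failed (optional shortcut)
mvn -DskipTests=true package install -rf :ranger-hive-plugin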

Build complete

A successful build ends with output like this:

[INFO] ranger ............................................. SUCCESS [06:41 min]
[INFO] Jdbc SQL Connector ................................. SUCCESS [ 3.348 s]
[INFO] Credential Support ................................. SUCCESS [ 4.005 s]
[INFO] Audit Component .................................... SUCCESS [ 4.903 s]
[INFO] Common library for Plugins ......................... SUCCESS [ 12.143 s]
[INFO] Installer Support Component ........................ SUCCESS [ 1.312 s]
[INFO] Credential Builder ................................. SUCCESS [ 3.434 s]
[INFO] Embedded Web Server Invoker ........................ SUCCESS [ 2.546 s]
[INFO] Key Management Service ............................. SUCCESS [ 6.004 s]
[INFO] ranger-plugin-classloader .......................... SUCCESS [ 2.380 s]
[INFO] HBase Security Plugin Shim ......................... SUCCESS [ 5.526 s]
[INFO] HBase Security Plugin .............................. SUCCESS [ 7.073 s]
[INFO] Hdfs Security Plugin ............................... SUCCESS [ 6.313 s]
[INFO] Hive Security Plugin ............................... SUCCESS [ 48.054 s]
[INFO] Knox Security Plugin Shim .......................... SUCCESS [ 34.204 s]
[INFO] Knox Security Plugin ............................... SUCCESS [ 13.456 s]
[INFO] Storm Security Plugin .............................. SUCCESS [01:11 min]
[INFO] YARN Security Plugin ............................... SUCCESS [ 2.923 s]
[INFO] Ranger Util ........................................ SUCCESS [ 3.467 s]
[INFO] Unix Authentication Client ......................... SUCCESS [ 1.768 s]
[INFO] Security Admin Web Application ..................... SUCCESS [05:34 min]
[INFO] KAFKA Security Plugin .............................. SUCCESS [ 4.829 s]
[INFO] SOLR Security Plugin ............................... SUCCESS [ 38.678 s]
[INFO] NiFi Security Plugin ............................... SUCCESS [ 3.360 s]
[INFO] NiFi Registry Security Plugin ...................... SUCCESS [ 3.765 s]
[INFO] Unix User Group Synchronizer ....................... SUCCESS [ 5.628 s]
[INFO] Ldap Config Check Tool ............................. SUCCESS [ 1.958 s]
[INFO] Unix Authentication Service ........................ SUCCESS [ 2.293 s]
[INFO] KMS Security Plugin ................................ SUCCESS [ 4.156 s]
[INFO] Tag Synchronizer ................................... SUCCESS [ 5.095 s]
[INFO] Hdfs Security Plugin Shim .......................... SUCCESS [ 1.885 s]
[INFO] Hive Security Plugin Shim .......................... SUCCESS [ 5.011 s]
[INFO] YARN Security Plugin Shim .......................... SUCCESS [ 2.419 s]
[INFO] Storm Security Plugin shim ......................... SUCCESS [ 2.299 s]
[INFO] KAFKA Security Plugin Shim ......................... SUCCESS [ 2.167 s]
[INFO] SOLR Security Plugin Shim .......................... SUCCESS [ 2.491 s]
[INFO] Atlas Security Plugin Shim ......................... SUCCESS [ 2.080 s]
[INFO] KMS Security Plugin Shim ........................... SUCCESS [ 2.216 s]
[INFO] ranger-examples .................................... SUCCESS [ 0.171 s]
[INFO] Ranger Examples - Conditions and ContextEnrichers .. SUCCESS [ 3.232 s]
[INFO] Ranger Examples - SampleApp ........................ SUCCESS [ 1.099 s]
[INFO] Ranger Examples - Ranger Plugin for SampleApp ...... SUCCESS [ 2.178 s]
[INFO] Ranger Tools ....................................... SUCCESS [ 4.450 s]
[INFO] Atlas Security Plugin .............................. SUCCESS [ 3.318 s]
[INFO] Sqoop Security Plugin .............................. SUCCESS [ 3.578 s]
[INFO] Sqoop Security Plugin Shim ......................... SUCCESS [ 1.990 s]
[INFO] Kylin Security Plugin .............................. SUCCESS [ 4.512 s]
[INFO] Kylin Security Plugin Shim ......................... SUCCESS [ 2.517 s]
[INFO] Unix Native Authenticator .......................... SUCCESS [ 1.846 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18:14 min
[INFO] Finished at: 2022-06-01T11:40:58+08:00
[INFO] ------------------------------------------------------------------------

Built package list

After a successful build, the distributable tarballs are generated under the target/ directory of the source tree:

-rw-r--r--  1 root root 295245545 May 28 16:33 ranger-2.1.0-admin.tar.gz
-rw-r--r-- 1 root root 48976682 May 28 16:33 ranger-2.1.0-atlas-plugin.tar.gz
-rw-r--r-- 1 root root 31709512 May 28 16:33 ranger-2.1.0-elasticsearch-plugin.tar.gz
-rw-r--r-- 1 root root 43390335 May 28 16:33 ranger-2.1.0-hbase-plugin.tar.gz
-rw-r--r-- 1 root root 41972314 May 28 16:33 ranger-2.1.0-hdfs-plugin.tar.gz
-rw-r--r-- 1 root root 41762386 May 28 16:33 ranger-2.1.0-hive-plugin.tar.gz
-rw-r--r-- 1 root root 58726808 May 28 16:33 ranger-2.1.0-kafka-plugin.tar.gz
-rw-r--r-- 1 root root 134727752 May 28 16:33 ranger-2.1.0-kms.tar.gz
-rw-r--r-- 1 root root 46122786 May 28 16:33 ranger-2.1.0-knox-plugin.tar.gz
-rw-r--r-- 1 root root 41685171 May 28 16:33 ranger-2.1.0-kylin-plugin.tar.gz
-rw-r--r-- 1 root root 34206 May 28 16:33 ranger-2.1.0-migration-util.tar.gz
-rw-r--r-- 1 root root 48387150 May 28 16:33 ranger-2.1.0-ozone-plugin.tar.gz
-rw-r--r-- 1 root root 61120560 May 28 16:33 ranger-2.1.0-presto-plugin.tar.gz
-rw-r--r-- 1 root root 19845038 May 28 16:33 ranger-2.1.0-ranger-tools.tar.gz
-rw-r--r-- 1 root root 36801 May 28 16:33 ranger-2.1.0-solr_audit_conf.tar.gz
-rw-r--r-- 1 root root 41366257 May 28 16:33 ranger-2.1.0-solr-plugin.tar.gz
-rw-r--r-- 1 root root 41893624 May 28 16:33 ranger-2.1.0-sqoop-plugin.tar.gz
-rw-r--r-- 1 root root 4434856 May 28 16:33 ranger-2.1.0-src.tar.gz
-rw-r--r-- 1 root root 54977723 May 28 16:33 ranger-2.1.0-storm-plugin.tar.gz
-rw-r--r-- 1 root root 35676787 May 28 16:33 ranger-2.1.0-tagsync.tar.gz
-rw-r--r-- 1 root root 17328489 May 28 16:33 ranger-2.1.0-usersync.tar.gz
-rw-r--r-- 1 root root 41899800 May 28 16:33 ranger-2.1.0-yarn-plugin.tar.gz

Ranger 2.1.0 Installation

Admin installation: unpack the admin package, configure the web application and Solr auditing

Configure admin install.properties

pwd:/soft/
tar -zxvf ranger-2.1.0-admin.tar.gz
pwd:/soft/ranger-2.1.0-admin
vim install.properties
#modify the following entries
db_root_user=root
db_root_password=root
db_host=localhost:3306
db_name=ranger
db_user=ranger
db_password=ranger
audit_store=solr
audit_solr_urls=http://localhost:6083/solr/ranger_audits
audit_solr_user=solr
policymgr_external_url=http://localhost:6080
policymgr_http_enabled=true
unix_user=root
unix_user_pwd=123456
unix_group=root
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This file provides a list of the deployment variables for the Policy Manager Web Application
#

#------------------------- DB CONFIG - BEGIN ----------------------------------
# Uncomment the below if the DBA steps need to be run separately
#setup_mode=SeparateDBA

PYTHON_COMMAND_INVOKER=python

#DB_FLAVOR=MYSQL|ORACLE|POSTGRES|MSSQL|SQLA
DB_FLAVOR=MYSQL
#
#
# Location of DB client library (please check the location of the jar file)
#
#SQL_CONNECTOR_JAR=/usr/share/java/ojdbc6.jar
#SQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar
#SQL_CONNECTOR_JAR=/usr/share/java/postgresql.jar
#SQL_CONNECTOR_JAR=/usr/share/java/sqljdbc4.jar
#SQL_CONNECTOR_JAR=/opt/sqlanywhere17/java/sajdbc4.jar
SQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar

#
# DB password for the DB admin user-id
# **************************************************************************
# ** If the password is left empty or not-defined here,
# ** it will try with blank password during installation process
# **************************************************************************
#
#db_root_user=root|SYS|postgres|sa|dba
#db_host=host:port # for DB_FLAVOR=MYSQL|POSTGRES|SQLA|MSSQL #for example: db_host=localhost:3306
#db_host=host:port:SID # for DB_FLAVOR=ORACLE #for SID example: db_host=localhost:1521:ORCL
#db_host=host:port/ServiceName # for DB_FLAVOR=ORACLE #for Service example: db_host=localhost:1521/XE
db_root_user=root
db_root_password=root
db_host=localhost:3306
#SSL config
db_ssl_enabled=false
db_ssl_required=false
db_ssl_verifyServerCertificate=false
#db_ssl_auth_type=1-way|2-way, where 1-way represents standard one way ssl authentication and 2-way represents mutual ssl authentication
db_ssl_auth_type=2-way
javax_net_ssl_keyStore=
javax_net_ssl_keyStorePassword=
javax_net_ssl_trustStore=
javax_net_ssl_trustStorePassword=
#
# DB UserId used for the Ranger schema
#
db_name=ranger
db_user=ranger
db_password=ranger

# change password. Password for below mentioned users can be changed only once using this property.
#PLEASE NOTE :: Password should be minimum 8 characters with min one alphabet and one numeric.
rangerAdmin_password=
rangerTagsync_password=
rangerUsersync_password=
keyadmin_password=

#Source for Audit Store. Currently solr and elasticsearch are supported.
# * audit_store is solr
audit_store=solr

# * audit_solr_url Elasticsearch Host(s). E.g. 127.0.0.1
audit_elasticsearch_urls=
audit_elasticsearch_port=
audit_elasticsearch_protocol=
audit_elasticsearch_user=
audit_elasticsearch_password=
audit_elasticsearch_index=
audit_elasticsearch_bootstrap_enabled=true

# * audit_solr_url URL to Solr. E.g. http://<solr_host>:6083/solr/ranger_audits
audit_solr_urls=http://localhost:6083/solr/ranger_audits
audit_solr_user=solr
audit_solr_password=
audit_solr_zookeepers=
audit_solr_collection_name=ranger_audits
#solr Properties for cloud mode
audit_solr_config_name=ranger_audits
audit_solr_no_shards=1
audit_solr_no_replica=1
audit_solr_max_shards_per_node=1
audit_solr_acl_user_list_sasl=solr,infra-solr
audit_solr_bootstrap_enabled=true

#------------------------- DB CONFIG - END ----------------------------------
#
# ------- PolicyManager CONFIG ----------------
#
policymgr_external_url=http://localhost:6080
policymgr_http_enabled=true
policymgr_https_keystore_file=
policymgr_https_keystore_keyalias=rangeradmin
policymgr_https_keystore_password=

#Add Supported Components list below separated by semi-colon, default value is empty string to support all components
#Example : policymgr_supportedcomponents=hive,hbase,hdfs
policymgr_supportedcomponents=

#
# ------- PolicyManager CONFIG - END ---------------
#

#
# ------- UNIX User CONFIG ----------------
#
unix_user=root
unix_user_pwd=123456
unix_group=root

#
# ------- UNIX User CONFIG - END ----------------
#
# #
# UNIX authentication service for Policy Manager
#
# PolicyManager can authenticate using UNIX username/password
# The UNIX server specified here as authServiceHostName needs to be installed with ranger-unix-ugsync package.
# Once the service is installed on authServiceHostName, the UNIX username/password from the host <authServiceHostName> can be used to login into policy manager
#
# ** The installation of ranger-unix-ugsync package can be installed after the policymanager installation is finished.
#
#LDAP|ACTIVE_DIRECTORY|UNIX|NONE
authentication_method=NONE
remoteLoginEnabled=true
authServiceHostName=localhost
authServicePort=5151
ranger_unixauth_keystore=keystore.jks
ranger_unixauth_keystore_password=password
ranger_unixauth_truststore=cacerts
ranger_unixauth_truststore_password=changeit

####LDAP settings - Required only if have selected LDAP authentication ####
#
# Sample Settings
#
#xa_ldap_url=ldap://127.0.0.1:389
#xa_ldap_userDNpattern=uid={0},ou=users,dc=xasecure,dc=net
#xa_ldap_groupSearchBase=ou=groups,dc=xasecure,dc=net
#xa_ldap_groupSearchFilter=(member=uid={0},ou=users,dc=xasecure,dc=net)
#xa_ldap_groupRoleAttribute=cn
#xa_ldap_base_dn=dc=xasecure,dc=net
#xa_ldap_bind_dn=cn=admin,ou=users,dc=xasecure,dc=net
#xa_ldap_bind_password=
#xa_ldap_referral=follow|ignore
#xa_ldap_userSearchFilter=(uid={0})

xa_ldap_url=
xa_ldap_userDNpattern=
xa_ldap_groupSearchBase=
xa_ldap_groupSearchFilter=
xa_ldap_groupRoleAttribute=
xa_ldap_base_dn=
xa_ldap_bind_dn=
xa_ldap_bind_password=
xa_ldap_referral=
xa_ldap_userSearchFilter=
####ACTIVE_DIRECTORY settings - Required only if have selected AD authentication ####
#
# Sample Settings
#
#xa_ldap_ad_domain=xasecure.net
#xa_ldap_ad_url=ldap://127.0.0.1:389
#xa_ldap_ad_base_dn=dc=xasecure,dc=net
#xa_ldap_ad_bind_dn=cn=administrator,ou=users,dc=xasecure,dc=net
#xa_ldap_ad_bind_password=
#xa_ldap_ad_referral=follow|ignore
#xa_ldap_ad_userSearchFilter=(sAMAccountName={0})

xa_ldap_ad_domain=
xa_ldap_ad_url=
xa_ldap_ad_base_dn=
xa_ldap_ad_bind_dn=
xa_ldap_ad_bind_password=
xa_ldap_ad_referral=
xa_ldap_ad_userSearchFilter=

#------------ Kerberos Config -----------------
spnego_principal=
spnego_keytab=
token_valid=30
cookie_domain=
cookie_path=/
admin_principal=
admin_keytab=
lookup_principal=
lookup_keytab=
hadoop_conf=/etc/hadoop/conf
#
#-------- SSO CONFIG - Start ------------------
#
sso_enabled=false
sso_providerurl=https://127.0.0.1:8443/gateway/knoxsso/api/v1/websso
sso_publickey=

#
#-------- SSO CONFIG - END ------------------

# Custom log directory path
RANGER_ADMIN_LOG_DIR=$PWD

# PID file path
RANGER_PID_DIR_PATH=/var/run/ranger

# ################# DO NOT MODIFY ANY VARIABLES BELOW #########################
#
# --- These deployment variables are not to be modified unless you understand the full impact of the changes
#
################################################################################
XAPOLICYMGR_DIR=$PWD
app_home=$PWD/ews/webapp
TMPFILE=$PWD/.fi_tmp
LOGFILE=$PWD/logfile
LOGFILES="$LOGFILE"

JAVA_BIN='java'
JAVA_VERSION_REQUIRED='1.8'
JAVA_ORACLE='Java(TM) SE Runtime Environment'

ranger_admin_max_heap_size=1g

#retry DB and Java patches after the given time in seconds.
PATCH_RETRY_INTERVAL=120
STALE_PATCH_ENTRY_HOLD_TIME=10

#mysql_create_user_file=${PWD}/db/mysql/create_dev_user.sql
mysql_core_file=db/mysql/optimized/current/ranger_core_db_mysql.sql
mysql_audit_file=db/mysql/xa_audit_db.sql
#mysql_asset_file=${PWD}/db/mysql/reset_asset.sql

#oracle_create_user_file=${PWD}/db/oracle/create_dev_user_oracle.sql
oracle_core_file=db/oracle/optimized/current/ranger_core_db_oracle.sql
oracle_audit_file=db/oracle/xa_audit_db_oracle.sql
#oracle_asset_file=${PWD}/db/oracle/reset_asset_oracle.sql
#
postgres_core_file=db/postgres/optimized/current/ranger_core_db_postgres.sql
postgres_audit_file=db/postgres/xa_audit_db_postgres.sql
#
sqlserver_core_file=db/sqlserver/optimized/current/ranger_core_db_sqlserver.sql
sqlserver_audit_file=db/sqlserver/xa_audit_db_sqlserver.sql
#
sqlanywhere_core_file=db/sqlanywhere/optimized/current/ranger_core_db_sqlanywhere.sql
sqlanywhere_audit_file=db/sqlanywhere/xa_audit_db_sqlanywhere.sql
cred_keystore_filename=$app_home/WEB-INF/classes/conf/.jceks/rangeradmin.jceks
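setup.sh loads the MySQL JDBC driver from the path configured in SQL_CONNECTOR_JAR above. On CentOS 7, one way to provide it is via yum, assuming the standard repositories are reachable (any other method that places the jar at that path also works):

yum install -y mysql-connector-java
ls -l /usr/share/java/mysql-connector-java.jar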

Configure Solr install.properties

pwd:/soft/ranger-2.1.0-admin/contrib/solr_for_audit_setup
vim install.properties
#modify the following entries
SOLR_USER=root
SOLR_GROUP=root
SOLR_INSTALL=true
#host the Solr package on an internal HTTP server and point this URL at it, or use the Apache archive link directly: http://archive.apache.org/dist/lucene/solr/8.3.0/solr-8.3.0.tgz
SOLR_DOWNLOAD_URL=http://192.168.1.222/solr/solr-8.3.0.tgz
SOLR_HOST_URL=http://localhost:6083
SOLR_RANGER_HOME=/opt/solr/ranger_audit_server
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#Note:
#1. This file is sourced from setup.sh, so make sure there are no spaces after the "="
#2. For variable with file path, please provide full path

#!/bin/bash

#JAVA_HOME to be used by Solr. Solr only support JDK 1.7 and above. If JAVA_HOME is not set
#in the env, then please set it here
#JAVA_HOME=

#The operating system (linux) user used by Solr process. You need to run Solr as the below user and group
SOLR_USER=root
SOLR_GROUP=root
#How long to keep the audit logs. Please note, audit records grows very rapidly. Make sure to
#allocate enough memory and disk space to the server running Solr.
MAX_AUDIT_RETENTION_DAYS=90

#If you want this script to install Solr, set the value to true. If it is already installed, then set this to false
#If it is true, then it will download and install it.
#NOTE: If you want the script to install Solr, then this script needs to be executed as root.
SOLR_INSTALL=true

### BEGIN: if SOLR_INSTALL==true ###
#Location to download Solr. If SOLR_INSTALL is true, then SOLR_DOWNLOAD_URL is mandatory
#For open source version, pick a mirror from below. Recommended versions are Apache Solr 5.2.1 or above
#http://lucene.apache.org/solr/mirrors-solr-latest-redir.html
#Note: If possible, use the link from one of the mirror site
#SOLR_DOWNLOAD_URL=http://archive.apache.org/dist/lucene/solr/5.2.1/solr-5.2.1.tgz
SOLR_DOWNLOAD_URL=http://192.168.1.222/solr/solr-8.3.0.tgz
### END: if SOLR_INSTALL==true ###

#The folder where Solr is installed. If SOLR_INSTALL=false, then Solr need to be preinstalled, else the setup will
#install at the below location
#Note: If you are using RPM from LucidWorks in HDP, then Solr is by default installed in the following location:
#SOLR_INSTALL_FOLDER=/opt/lucidworks-hdpsearch/solr
SOLR_INSTALL_FOLDER=/opt/solr

#The location for the Solr configuration for Ranger. This script copies required configuration and
#startup scripts to the $SOLR_RANGER_HOME folder.
#NOTE: In SolrCloud mode, the data folders are under this folder. So make sure this is on seperate drive
# with enough disk space. Have 1TB free disk space on this volume. Also regularly monitor available disk space
# for this volume
#SOLR_RANGER_HOME=/opt/solr/ranger_audit_server
SOLR_RANGER_HOME=/opt/solr/ranger_audit_server

#Port for Solr instance to be used by Ranger.
SOLR_RANGER_PORT=6083

#Standalone or SolrCloud. Valid values are "standalone" or "solrcloud"
SOLR_DEPLOYMENT=standalone

#### BEGIN: if SOLR_DEPLOYMENT=standalone ##########################
#Location for the data files. Make sure it has enough disk space. Since audits records can grow dramatically,
#please have 1TB free disk space for the data folder. Also regularly monitor available disk space for this volume
SOLR_RANGER_DATA_FOLDER=/opt/solr/ranger_audit_server/data
#### END: if SOLR_DEPLOYMENT=standalone ##########################

#### BEGIN: if SOLR_DEPLOYMENT=solrcloud ##########################
#Comma seperated list of of zookeeper host and path. Give fully qualified domain name for the host
#SOLR_ZK=localhost:2181/ranger_audits
SOLR_ZK=
#Base URL of the Solr. Used for creating collections
SOLR_HOST_URL=http://localhost:6083
#Number of shards
SOLR_SHARDS=1
#Number of replication
SOLR_REPLICATION=1
#### END: if SOLR_DEPLOYMENT=solrcloud ##########################

#Location for the log file. Please note that "solr" or the process owner should have write permission
#to log folder
#SOLR_LOG_FOLDER=logs
SOLR_LOG_FOLDER=/var/log/solr/ranger_audits

SOLR_RANGER_COLLECTION=ranger_audits

#Memory for Solr. Both min and max memory to the java process are set to this value.
#Note: In production, please assign enough memory. It is recommended to have at least 2GB RAM.
# Higher the RAM, the better. Solr core can take upto 32GB. For dev test you can use 512m
#SOLR_MAX_MEM=2g
#SOLR_MAX_MEM=512m
SOLR_MAX_MEM=2g

Initialize Solr

pwd:/soft/ranger-2.1.0-admin/contrib/solr_for_audit_setup
./setup.sh
Wed Jun  1 11:58:55 CST 2022|INFO|Solr Ranger Home </opt/solr/ranger_audit_server> exists. Will overwrite configurations
Wed Jun 1 11:58:55 CST 2022|WARN|/opt/solr exists. This script will overwrite some files
Wed Jun 1 11:58:55 CST 2022|INFO|Downloading solr from http://192.168.1.222/solr/solr-8.3.0.tgz
--2022-06-01 11:58:55-- http://192.168.1.222/solr/solr-8.3.0.tgz
Connecting to 192.168.1.222:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 186097798 (177M) [application/x-gzip]
Saving to: ‘solr-8.3.0.tgz’

100%[==========================================>] 186,097,798  110MB/s   in 1.6s

2022-06-01 11:58:56 (110 MB/s) - ‘solr-8.3.0.tgz’ saved [186097798/186097798]

Wed Jun 1 11:58:59 CST 2022|WARN|/opt/solr exists. Moving to /opt/solr.bk.060122115855
Wed Jun 1 11:58:59 CST 2022|INFO|Installed Solr in /opt/solr
Wed Jun 1 11:58:59 CST 2022|INFO|Configuring standalone instance
Wed Jun 1 11:58:59 CST 2022|INFO|Copying Ranger Audit Server configuration to /opt/solr/ranger_audit_server
Wed Jun 1 11:59:00 CST 2022|INFO|Done configuring Solr for Apache Ranger Audit
Wed Jun 1 11:59:00 CST 2022|INFO|Solr HOME for Ranger Audit is /opt/solr/ranger_audit_server
Wed Jun 1 11:59:00 CST 2022|INFO|Data folder for Audit logs is /opt/solr/ranger_audit_server/data
Wed Jun 1 11:59:00 CST 2022|INFO|To start Solr run /opt/solr/ranger_audit_server/scripts/start_solr.sh
Wed Jun 1 11:59:00 CST 2022|INFO|To stop Solr run /opt/solr/ranger_audit_server/scripts/stop_solr.sh
Wed Jun 1 11:59:00 CST 2022|INFO|After starting Solr for RangerAudit, it will listen at 6083. E.g http://app01-saas:6083
Wed Jun 1 11:59:00 CST 2022|INFO|Configure Ranger to use the following URL http://app01-saas:6083/solr/ranger_audits
Wed Jun 1 11:59:00 CST 2022|INFO| ** NOTE: If Solr is Secured then solrclient JAAS configuration has to be added to Ranger Admin and Ranger Plugin properties
Wed Jun 1 11:59:00 CST 2022|INFO| ** Refer documentation on how to configure Ranger for audit to Secure Solr
########## Done ###################
Created file /opt/solr/ranger_audit_server/install_notes.txt with instructions to start and stop
###################################

Start Solr

pwd:/opt/solr/ranger_audit_server/scripts
# ./start_solr.sh -force
NOTE: Please install lsof as this script needs it to determine if Solr is listening on port 6083.

Started Solr server on port 6083 (pid=30207). Happy searching!
#check
# netstat -anp | grep 6083
tcp6 0 0 :::6083 :::* LISTEN 30207/java
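As an extra check, the standard Solr ping handler should answer once the ranger_audits core created by the setup script has loaded (core name taken from the configuration above):

# curl 'http://localhost:6083/solr/ranger_audits/admin/ping?wt=json'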

Initialize the admin

pwd:/soft/ranger-2.1.0-admin
./setup.sh
2022-06-01 12:03:27,774  [JISQL] /usr/java/jdk1.8.0_231-amd64/bin/java  -cp /usr/share/java/mysql-connector-java.jar:/soft/ranger-2.1.0-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://localhost:3306/ranger -u 'ranger' -p '********' -noheader -trim -c \;  -query "select 1;"
2022-06-01 12:03:28,301 [I] Checking connection passed.
Installation of Ranger PolicyManager Web Application is completed.

Start the admin

# ranger-admin start
Starting Apache Ranger Admin Service
Apache Ranger Admin Service with pid 32471 has started.
#CHECK
# netstat -anp | grep 6080
tcp6 0 0 :::6080 :::* LISTEN 32471/java
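The web UI should now be reachable at http://localhost:6080 (default login admin/admin unless rangerAdmin_password was set during setup). A quick call against the public REST API is another way to confirm the service is up; the servicedef endpoint below is part of Ranger's public v2 API:

# curl -u admin:admin 'http://localhost:6080/service/public/v2/api/servicedef'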

Audit functionality

With Solr and Ranger Admin running, audit events can be browsed under the Audit tab of the Ranger Admin web UI.

Usersync installation

Configure install.properties

pwd:/soft/
tar -zxvf ranger-2.1.0-usersync.tar.gz
pwd:/soft/ranger-2.1.0-usersync
vim install.properties
#modified entries
POLICY_MGR_URL =http://localhost:6080
unix_user=root
unix_group=root
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The base path for the usersync process
ranger_base_dir = /etc/ranger

#
# The following URL should be the base URL for connecting to the policy manager web application
# For example:
#
# POLICY_MGR_URL = http://policymanager.xasecure.net:6080
#
POLICY_MGR_URL =http://localhost:6080

# sync source, only unix and ldap are supported at present
# defaults to unix
SYNC_SOURCE = unix

#
# Minimum Unix User-id to start SYNC.
# This should avoid creating UNIX system-level users in the Policy Manager
#
MIN_UNIX_USER_ID_TO_SYNC = 500

# Minimum Unix Group-id to start SYNC.
# This should avoid creating UNIX system-level users in the Policy Manager
#
MIN_UNIX_GROUP_ID_TO_SYNC = 500

# sync interval in minutes
# user, groups would be synced again at the end of each sync interval
# defaults to 5 if SYNC_SOURCE is unix
# defaults to 360 if SYNC_SOURCE is ldap
SYNC_INTERVAL =1

#User and group for the usersync process
unix_user=root
unix_group=root

#change password of rangerusersync user. Please note that this password should be as per rangerusersync user in ranger
rangerUsersync_password=

#Set to run in kerberos environment
usersync_principal=
usersync_keytab=
hadoop_conf=/etc/hadoop/conf
#
# The file where all credential is kept in cryptic format
#
CRED_KEYSTORE_FILENAME=/etc/ranger/usersync/conf/rangerusersync.jceks

# SSL Authentication
AUTH_SSL_ENABLED=false
AUTH_SSL_KEYSTORE_FILE=/etc/ranger/usersync/conf/cert/unixauthservice.jks
AUTH_SSL_KEYSTORE_PASSWORD=UnIx529p
AUTH_SSL_TRUSTSTORE_FILE=
AUTH_SSL_TRUSTSTORE_PASSWORD=

# ---------------------------------------------------------------
# The following properties are relevant only if SYNC_SOURCE = ldap
# ---------------------------------------------------------------

# The below properties ROLE_ASSIGNMENT_LIST_DELIMITER, USERS_GROUPS_ASSIGNMENT_LIST_DELIMITER, USERNAME_GROUPNAME_ASSIGNMENT_LIST_DELIMITER,
#and GROUP_BASED_ROLE_ASSIGNMENT_RULES can be used to assign role to LDAP synced users and groups
#NOTE all the delimiters should have different values and the delimiters should not contain characters that are allowed in userName or GroupName

# default value ROLE_ASSIGNMENT_LIST_DELIMITER = &
ROLE_ASSIGNMENT_LIST_DELIMITER = &

#default value USERS_GROUPS_ASSIGNMENT_LIST_DELIMITER = :
USERS_GROUPS_ASSIGNMENT_LIST_DELIMITER = :

#default value USERNAME_GROUPNAME_ASSIGNMENT_LIST_DELIMITER = ,
USERNAME_GROUPNAME_ASSIGNMENT_LIST_DELIMITER = ,

# with above mentioned delimiters a sample value would be ROLE_SYS_ADMIN:u:userName1,userName2&ROLE_SYS_ADMIN:g:groupName1,groupName2&ROLE_KEY_ADMIN:u:userName&ROLE_KEY_ADMIN:g:groupName&ROLE_USER:u:userName3,userName4&ROLE_USER:g:groupName3
#&ROLE_ADMIN_AUDITOR:u:userName&ROLE_KEY_ADMIN_AUDITOR:u:userName&ROLE_KEY_ADMIN_AUDITOR:g:groupName&ROLE_ADMIN_AUDITOR:g:groupName
GROUP_BASED_ROLE_ASSIGNMENT_RULES =

# URL of source ldap
# a sample value would be: ldap://ldap.example.com:389
# Must specify a value if SYNC_SOURCE is ldap
SYNC_LDAP_URL =

# ldap bind dn used to connect to ldap and query for users and groups
# a sample value would be cn=admin,ou=users,dc=hadoop,dc=apache,dc=org
# Must specify a value if SYNC_SOURCE is ldap
SYNC_LDAP_BIND_DN =

# ldap bind password for the bind dn specified above
# please ensure read access to this file is limited to root, to protect the password
# Must specify a value if SYNC_SOURCE is ldap
# unless anonymous search is allowed by the directory on users and group
SYNC_LDAP_BIND_PASSWORD =

# ldap delta sync flag used to periodically sync users and groups based on the updates in the server
# please customize the value to suit your deployment
# default value is set to true when is SYNC_SOURCE is ldap
SYNC_LDAP_DELTASYNC =

# search base for users and groups
# sample value would be dc=hadoop,dc=apache,dc=org
SYNC_LDAP_SEARCH_BASE =

# search base for users
# sample value would be ou=users,dc=hadoop,dc=apache,dc=org
# overrides value specified in SYNC_LDAP_SEARCH_BASE
SYNC_LDAP_USER_SEARCH_BASE =

# search scope for the users, only base, one and sub are supported values
# please customize the value to suit your deployment
# default value: sub
SYNC_LDAP_USER_SEARCH_SCOPE = sub

# objectclass to identify user entries
# please customize the value to suit your deployment
# default value: person
SYNC_LDAP_USER_OBJECT_CLASS = person

# optional additional filter constraining the users selected for syncing
# a sample value would be (dept=eng)
# please customize the value to suit your deployment
# default value is empty
SYNC_LDAP_USER_SEARCH_FILTER =

# attribute from user entry that would be treated as user name
# please customize the value to suit your deployment
# default value: cn
SYNC_LDAP_USER_NAME_ATTRIBUTE = cn

# attribute from user entry whose values would be treated as
# group values to be pushed into Policy Manager database
# You could provide multiple attribute names separated by comma
# default value: memberof, ismemberof
SYNC_LDAP_USER_GROUP_NAME_ATTRIBUTE = memberof,ismemberof
#
# UserSync - Case Conversion Flags
# possible values: none, lower, upper
SYNC_LDAP_USERNAME_CASE_CONVERSION=lower
SYNC_LDAP_GROUPNAME_CASE_CONVERSION=lower

#user sync log path
logdir=/soft/ranger-2.1.0-usersync/logs
#/var/log/ranger/usersync

# PID DIR PATH
USERSYNC_PID_DIR_PATH=/var/run/ranger

# do we want to do ldapsearch to find groups instead of relying on user entry attributes
# valid values: true, false
# any value other than true would be treated as false
# default value: false
SYNC_GROUP_SEARCH_ENABLED=

# do we want to do ldapsearch to find groups instead of relying on user entry attributes and
# sync memberships of those groups
# valid values: true, false
# any value other than true would be treated as false
# default value: false
SYNC_GROUP_USER_MAP_SYNC_ENABLED=

# search base for groups
# sample value would be ou=groups,dc=hadoop,dc=apache,dc=org
# overrides value specified in SYNC_LDAP_SEARCH_BASE, SYNC_LDAP_USER_SEARCH_BASE
# if a value is not specified, takes the value of SYNC_LDAP_SEARCH_BASE
# if SYNC_LDAP_SEARCH_BASE is also not specified, takes the value of SYNC_LDAP_USER_SEARCH_BASE
SYNC_GROUP_SEARCH_BASE=

# search scope for the groups, only base, one and sub are supported values
# please customize the value to suit your deployment
# default value: sub
SYNC_GROUP_SEARCH_SCOPE=

# objectclass to identify group entries
# please customize the value to suit your deployment
# default value: groupofnames
SYNC_GROUP_OBJECT_CLASS=

# optional additional filter constraining the groups selected for syncing
# a sample value would be (dept=eng)
# please customize the value to suit your deployment
# default value is empty
SYNC_LDAP_GROUP_SEARCH_FILTER=

# attribute from group entry that would be treated as group name
# please customize the value to suit your deployment
# default value: cn
SYNC_GROUP_NAME_ATTRIBUTE=

# attribute from group entry that is list of members
# please customize the value to suit your deployment
# default value: member
SYNC_GROUP_MEMBER_ATTRIBUTE_NAME=

# do we want to use paged results control during ldapsearch for user entries
# valid values: true, false
# any value other than true would be treated as false
# default value: true
# if the value is false, typical AD would not return more than 1000 entries
SYNC_PAGED_RESULTS_ENABLED=

# page size for paged results control
# search results would be returned page by page with the specified number of entries per page
# default value: 500
SYNC_PAGED_RESULTS_SIZE=
#LDAP context referral could be ignore or follow
SYNC_LDAP_REFERRAL =ignore

# if you want to enable or disable jvm metrics for usersync process
# valid values: true, false
# any value other than true would be treated as false
# default value: false
# if the value is false, jvm metrics is not created
JVM_METRICS_ENABLED=

# filename of jvm metrics created for usersync process
# default value: ranger_usersync_metric.json
JVM_METRICS_FILENAME=

#file directory for jvm metrics
# default value : logdir
JVM_METRICS_FILEPATH=

#frequency for jvm metrics to be updated
# default value : 10000 milliseconds
JVM_METRICS_FREQUENCY_TIME_IN_MILLIS=

Initialize usersync

# ./setup.sh
INFO: moving [/etc/ranger/usersync/conf/java_home.sh] to [/etc/ranger/usersync/conf/.java_home.sh.01062022121036] .......
Direct Key not found:SYNC_GROUP_USER_MAP_SYNC_ENABLED
Direct Key not found:hadoop_conf
Direct Key not found:ranger_base_dir
Direct Key not found:USERSYNC_PID_DIR_PATH
Direct Key not found:rangerUsersync_password
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
The alias usersync.ssl.key.password already exists!! Will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: usersync.ssl.key.password from CredentialProvider: jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks
Credential usersync.ssl.key.password has been successfully deleted.
Provider jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks was updated.
WARNING: You have accepted the use of the default provider password
by not configuring a password in one of the two following locations:
* In the environment variable HADOOP_CREDSTORE_PASSWORD
* In a file referred to by the configuration entry
hadoop.security.credstore.java-keystore-provider.password-file.
Please review the documentation regarding provider passwords in
the keystore passwords section of the Credential Provider API
Continuing with the default provider password.
usersync.ssl.key.password has been successfully created.
Provider jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks was updated.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
The alias ranger.usersync.ldap.bindalias already exists!! Will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: ranger.usersync.ldap.bindalias from CredentialProvider: jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks
Credential ranger.usersync.ldap.bindalias has been successfully deleted.
Provider jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks was updated.
WARNING: You have accepted the use of the default provider password
by not configuring a password in one of the two following locations:
* In the environment variable HADOOP_CREDSTORE_PASSWORD
* In a file referred to by the configuration entry
hadoop.security.credstore.java-keystore-provider.password-file.
Please review the documentation regarding provider passwords in
the keystore passwords section of the Credential Provider API
Continuing with the default provider password.
ranger.usersync.ldap.bindalias has been successfully created.
Provider jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks was updated.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
The alias usersync.ssl.truststore.password already exists!! Will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: usersync.ssl.truststore.password from CredentialProvider: jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks
Credential usersync.ssl.truststore.password has been successfully deleted.
Provider jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks was updated.
WARNING: You have accepted the use of the default provider password
by not configuring a password in one of the two following locations:
* In the environment variable HADOOP_CREDSTORE_PASSWORD
* In a file referred to by the configuration entry
hadoop.security.credstore.java-keystore-provider.password-file.
Please review the documentation regarding provider passwords in
the keystore passwords section of the Credential Provider API
Continuing with the default provider password.
usersync.ssl.truststore.password has been successfully created.
Provider jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks was updated.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
The alias ranger.usersync.policymgr.password already exists!! Will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: ranger.usersync.policymgr.password from CredentialProvider: jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks
Credential ranger.usersync.policymgr.password has been successfully deleted.
Provider jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks was updated.
WARNING: You have accepted the use of the default provider password
by not configuring a password in one of the two following locations:
* In the environment variable HADOOP_CREDSTORE_PASSWORD
* In a file referred to by the configuration entry
hadoop.security.credstore.java-keystore-provider.password-file.
Please review the documentation regarding provider passwords in
the keystore passwords section of the Credential Provider API
Continuing with the default provider password.
ranger.usersync.policymgr.password has been successfully created.
Provider jceks://file/etc/ranger/usersync/conf/rangerusersync.jceks was updated.
INFO: moving [/etc/ranger/usersync/conf/ranger-ugsync-site.xml] to [/etc/ranger/usersync/conf/.ranger-ugsync-site.xml.01062022121042] .......
WARNING: Unix Authentication Program (/soft/ranger-2.1.0-usersync/native/pamCredValidator.uexe) is not available for setting chmod(4550), chown(root:root)

Configure ranger-ugsync-site.xml

# pwd : /soft/ranger-2.1.0-usersync/conf
# vim ranger-ugsync-site.xml
# change the ranger.usersync.enabled property from:
<property>
        <name>ranger.usersync.enabled</name>
        <value>false</value>
</property>
# to the new value:
<property>
        <name>ranger.usersync.enabled</name>
        <value>true</value>
</property>

Start usersync

# pwd:/soft/ranger-2.1.0-usersync
# ./start.sh
NOTE: This script is provided for backward compatibility only. All scripts calling this should now use '/usr/bin/ranger-usersync start' instead
Apache Ranger Usersync Service is already running [pid={1367}]

Usersync results

Once usersync has run, the local Unix users and groups show up under Settings > Users/Groups in the Ranger Admin web UI.
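The synced accounts can also be listed through the admin REST API; the call below assumes the default admin/admin credentials:

# curl -u admin:admin 'http://localhost:6080/service/xusers/users'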

To be continued: plugin integration
