Ansible in Practice
Preface:
Suppose you need to install a piece of software on a group of servers. With only a few machines that is tolerable, but with hundreds of servers doing it by hand wastes an enormous amount of time; this is exactly the problem Ansible was created to solve. The many modules Ansible ships with are extremely powerful.
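To get a feel for this, a single ad-hoc command from the control machine can already drive an entire group of servers at once. This is a generic sketch, not from this article; the group name `webservers` and the package name are placeholders:

```shell
# Install htop on every host in the (hypothetical) "webservers" inventory group;
# -m selects the Ansible module, -a passes its arguments.
ansible webservers -m yum -a "name=htop state=present"

# Or run an arbitrary command on all of them in one shot:
ansible webservers -m shell -a "uptime"
```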
I. About Ansible
1. What is Ansible
(1) https://www.w3cschool.cn/automate_with_ansible/automate_with_ansible-atvo27or.html
2. Setting up an Ansible environment
(2) https://www.w3cschool.cn/automate_with_ansible/automate_with_ansible-1khc27p1.html
(3) https://www.cnblogs.com/gzxbkk/p/7515634.html
3. Installing Ansible
4. For the rest of the series, see the links below. The key material is the documentation of Ansible's modules; in my experience you do not need to memorize them all, just look up the relevant module when you need it:
(1) http://www.cnblogs.com/f-ck-need-u/p/7576137.html#auto_id_2
(2) http://www.zsythink.net/ (the "运维技术" -> ansible section)
(3) http://www.ansible.com.cn/index.html (Chinese translation of the docs)
(4) https://docs.ansible.com/ansible/latest/user_guide/playbooks.html (official Ansible playbooks guide)
II. Hands-on examples
1. Installing the JDK in bulk with Ansible (YAML file)
```yaml
- hosts: clickhouse_cluster_setup_beijing   # the host or group to target
  remote_user: root                         # run as root
  tasks:
    - name: copy jdk to remote hosts
      copy: src=/root/usr/jdk-8u201-linux-x64.tar.gz dest=/usr/local/ backup=yes
    - name: untar jdk
      shell: chdir=/usr/local/ tar -xzvf jdk-8u201-linux-x64.tar.gz
    - name: create links
      file: src=/usr/local/jdk1.8.0_201 dest=/usr/local/java state=link
    - name: java_profile config
      shell: /bin/echo {{ item }} >> /etc/profile
      with_items:
        - export JAVA_HOME=/usr/local/java
        - export JRE_HOME=/usr/local/java/jre
        - export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar:\$JRE_HOME/lib:\$CLASSPATH
        - export PATH=\$JAVA_HOME/bin:\$PATH
    - name: take effect   # note: this only affects the task's own shell; new login shells read /etc/profile anyway
      shell: source /etc/profile
```
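After the play finishes, a quick ad-hoc check can confirm the JDK and symlink landed on every machine. This is a sketch; the group name comes from the inventory described below:

```shell
# Run java -version through the symlink on every host in the group
ansible clickhouse_cluster_setup_beijing -m shell -a "/usr/local/java/bin/java -version"
```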
Note the clickhouse_cluster_setup_beijing after hosts. Once Ansible is installed on the control machine, you configure the IPs of the machines you want to manage in the generated /etc/ansible/hosts file. For example, clickhouse_cluster_setup_beijing maps to five machines:
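For illustration, /etc/ansible/hosts might look like the following; the IPs here are made-up placeholders, substitute your own five machines:

```ini
[clickhouse_cluster_setup_beijing]
10.0.0.11
10.0.0.12
10.0.0.13
10.0.0.14
10.0.0.15
```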
2. Installing ClickHouse in bulk with Ansible
The slightly tricky part here is using the template module to render your own Jinja2 templates and push the resulting configuration to every machine. For an introduction to Jinja2, see http://docs.jinkan.org/docs/jinja2/ and http://www.zsythink.net/archives/3021. You also need some background in ClickHouse deployment to understand why the templates are structured this way.
The config.xml.j2 template file:
- <?xml version="1.0"?>
- <!--
- NOTE: User and query level settings are set up in "users.xml" file.
- -->
- <yandex>
- <logger>
- <!-- Possible levels: https://github.com/pocoproject/poco/blob/develop/Foundation/include/Poco/Logger.h#L105 -->
- <level>trace</level>
- <log>/data/clickhouse/logs/server.log</log>
- <errorlog>/data/clickhouse/logs/error.log</errorlog>
- <size>1000M</size>
- <count></count>
- <!-- <console></console> --> <!-- Default behavior is autodetection (log to console if not daemon mode and is tty) -->
- </logger>
- <!--display_name>production</display_name--> <!-- It is the name that will be shown in the client -->
- <http_port></http_port>
- <tcp_port></tcp_port>
- <!-- For HTTPS and SSL over native protocol. -->
- <!--
- <https_port></https_port>
- <tcp_port_secure></tcp_port_secure>
- -->
- <!-- Used with https_port and tcp_port_secure. Full ssl options list: https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h#L71 -->
- <openSSL>
- <server> <!-- Used for https server AND secure tcp port -->
- <!-- openssl req -subj "/CN=localhost" -new -newkey rsa: -days -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
- <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
- <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
- <!-- openssl dhparam -out /etc/clickhouse-server/dhparam.pem -->
- <dhParamsFile>/etc/clickhouse-server/dhparam.pem</dhParamsFile>
- <verificationMode>none</verificationMode>
- <loadDefaultCAFile>true</loadDefaultCAFile>
- <cacheSessions>true</cacheSessions>
- <disableProtocols>sslv2,sslv3</disableProtocols>
- <preferServerCiphers>true</preferServerCiphers>
- </server>
- <client> <!-- Used for connecting to https dictionary source -->
- <loadDefaultCAFile>true</loadDefaultCAFile>
- <cacheSessions>true</cacheSessions>
- <disableProtocols>sslv2,sslv3</disableProtocols>
- <preferServerCiphers>true</preferServerCiphers>
- <!-- Use for self-signed: <verificationMode>none</verificationMode> -->
- <invalidCertificateHandler>
- <!-- Use for self-signed: <name>AcceptCertificateHandler</name> -->
- <name>RejectCertificateHandler</name>
- </invalidCertificateHandler>
- </client>
- </openSSL>
- <!-- Default root page on http[s] server. For example load UI from https://tabix.io/ when opening http://localhost:8123 -->
- <!--
- <http_server_default_response><![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]></http_server_default_response>
- -->
- <!-- Port for communication between replicas. Used for data exchange. -->
- <interserver_http_port></interserver_http_port>
- <!-- Hostname that is used by other replicas to request this server.
- If not specified, than it is determined analoguous to 'hostname -f' command.
- This setting could be used to switch replication to another network interface.
- -->
- <!--
- <interserver_http_host>example.yandex.ru</interserver_http_host>
- -->
- <!-- Listen specified host. use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere. -->
- <!-- <listen_host>::</listen_host> -->
- <!-- Same for hosts with disabled ipv6: -->
- <listen_host>0.0.0.0</listen_host>
- <!-- Default values - try listen localhost on ipv4 and ipv6: -->
- <!--
- <listen_host>::</listen_host>
- <listen_host>127.0.0.1</listen_host>
- -->
- <!-- Don't exit if ipv6 or ipv4 unavailable, but listen_host with this protocol specified -->
- <!-- <listen_try></listen_try> -->
- <!-- Allow listen on same address:port -->
- <!-- <listen_reuse_port></listen_reuse_port> -->
- <!-- <listen_backlog></listen_backlog> -->
- <max_connections></max_connections>
- <keep_alive_timeout></keep_alive_timeout>
- <!-- Maximum number of concurrent queries. -->
- <max_concurrent_queries></max_concurrent_queries>
- <!-- Set limit on number of open files (default: maximum). This setting makes sense on Mac OS X because getrlimit() fails to retrieve
- correct maximum value. -->
- <!-- <max_open_files></max_open_files> -->
- <!-- Size of cache of uncompressed blocks of data, used in tables of MergeTree family.
- In bytes. Cache is single for server. Memory is allocated only on demand.
- Cache is used when 'use_uncompressed_cache' user setting turned on (off by default).
- Uncompressed cache is advantageous only for very short queries and in rare cases.
- -->
- <uncompressed_cache_size></uncompressed_cache_size>
- <!-- Approximate size of mark cache, used in tables of MergeTree family.
- In bytes. Cache is single for server. Memory is allocated only on demand.
- You should not lower this value.
- -->
- <mark_cache_size></mark_cache_size>
- <!-- Path to data directory, with trailing slash. -->
- <path>/data/clickhouse/</path>
- <!-- Path to temporary data for processing hard queries. -->
- <tmp_path>/data/clickhouse/tmp/</tmp_path>
- <!-- Directory with user provided files that are accessible by 'file' table function. -->
- <user_files_path>/data/clickhouse/user_files/</user_files_path>
- <!-- Path to configuration file with users, access rights, profiles of settings, quotas. -->
- <users_config>users.xml</users_config>
- <!-- Default profile of settings. -->
- <default_profile>default</default_profile>
- <!-- System profile of settings. This settings are used by internal processes (Buffer storage, Distibuted DDL worker and so on). -->
- <!-- <system_profile>default</system_profile> -->
- <!-- Default database. -->
- <default_database>default</default_database>
- <!-- Server time zone could be set here.
- Time zone is used when converting between String and DateTime types,
- when printing DateTime in text formats and parsing DateTime from text,
- it is used in date and time related functions, if specific time zone was not passed as an argument.
- Time zone is specified as identifier from IANA time zone database, like UTC or Africa/Abidjan.
- If not specified, system time zone at server startup is used.
- Please note, that server could display time zone alias instead of specified name.
- Example: W-SU is an alias for Europe/Moscow and Zulu is an alias for UTC.
- -->
- <!-- <timezone>Europe/Moscow</timezone> -->
- <!-- You can specify umask here (see "man umask"). Server will apply it on startup.
- Number is always parsed as octal. Default umask is (other users cannot read logs, data files, etc; group can only read).
- -->
- <!-- <umask></umask> -->
- <!-- Perform mlockall after startup to lower first queries latency
- and to prevent clickhouse executable from being paged out under high IO load.
- Enabling this option is recommended but will lead to increased startup time for up to a few seconds.
- -->
- <mlock_executable>false</mlock_executable>
- <!-- Configuration of clusters that could be used in Distributed tables.
- https://clickhouse.yandex/docs/en/table_engines/distributed/
- -->
- <remote_servers incl="clickhouse_remote_servers" />
- <!-- If element has 'incl' attribute, then for it's value will be used corresponding substitution from another file.
- By default, path to file with substitutions is /etc/metrika.xml. It could be changed in config in 'include_from' element.
- Values for substitutions are specified in /yandex/name_of_substitution elements in that file.
- -->
- <include_from>/etc/clickhouse-server/metrika.xml</include_from>
- <!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
- Optional. If you don't use replicated tables, you could omit that.
- See https://clickhouse.yandex/docs/en/table_engines/replication/
- -->
- <zookeeper incl="zookeeper-servers" optional="true" />
- <!-- Substitutions for parameters of replicated tables.
- Optional. If you don't use replicated tables, you could omit that.
- See https://clickhouse.yandex/docs/en/table_engines/replication/#creating-replicated-tables
- -->
- <macros incl="macros" optional="true" />
- <!-- Reloading interval for embedded dictionaries, in seconds. Default: . -->
- <builtin_dictionaries_reload_interval></builtin_dictionaries_reload_interval>
- <!-- Maximum session timeout, in seconds. Default: . -->
- <max_session_timeout></max_session_timeout>
- <!-- Default session timeout, in seconds. Default: . -->
- <default_session_timeout></default_session_timeout>
- <!-- Sending data to Graphite for monitoring. Several sections can be defined. -->
- <!--
- interval - send every X second
- root_path - prefix for keys
- hostname_in_path - append hostname to root_path (default = true)
- metrics - send data from table system.metrics
- events - send data from table system.events
- asynchronous_metrics - send data from table system.asynchronous_metrics
- -->
- <!--
- <graphite>
- <host>localhost</host>
- <port></port>
- <timeout>0.1</timeout>
- <interval></interval>
- <root_path>one_min</root_path>
- <hostname_in_path>true</hostname_in_path>
- <metrics>true</metrics>
- <events>true</events>
- <asynchronous_metrics>true</asynchronous_metrics>
- </graphite>
- <graphite>
- <host>localhost</host>
- <port></port>
- <timeout>0.1</timeout>
- <interval></interval>
- <root_path>one_sec</root_path>
- <metrics>true</metrics>
- <events>true</events>
- <asynchronous_metrics>false</asynchronous_metrics>
- </graphite>
- -->
- <!-- Query log. Used only for queries with setting log_queries = . -->
- <query_log>
- <!-- What table to insert data. If table is not exist, it will be created.
- When query log structure is changed after system update,
- then old table will be renamed and new table will be created automatically.
- -->
- <database>system</database>
- <table>query_log</table>
- <!--
- PARTITION BY expr https://clickhouse.yandex/docs/en/table_engines/custom_partitioning_key/
- Example:
- event_date
- toMonday(event_date)
- toYYYYMM(event_date)
- toStartOfHour(event_time)
- -->
- <partition_by>toYYYYMM(event_date)</partition_by>
- <!-- Interval of flushing data. -->
- <flush_interval_milliseconds></flush_interval_milliseconds>
- </query_log>
- <!-- Query thread log. Has information about all threads participated in query execution.
- Used only for queries with setting log_query_threads = . -->
- <query_thread_log>
- <database>system</database>
- <table>query_thread_log</table>
- <partition_by>toYYYYMM(event_date)</partition_by>
- <flush_interval_milliseconds></flush_interval_milliseconds>
- </query_thread_log>
- <!-- Uncomment if use part log.
- Part log contains information about all actions with parts in MergeTree tables (creation, deletion, merges, downloads).
- <part_log>
- <database>system</database>
- <table>part_log</table>
- <flush_interval_milliseconds></flush_interval_milliseconds>
- </part_log>
- -->
- <!-- Parameters for embedded dictionaries, used in Yandex.Metrica.
- See https://clickhouse.yandex/docs/en/dicts/internal_dicts/
- -->
- <!-- Path to file with region hierarchy. -->
- <!-- <path_to_regions_hierarchy_file>/opt/geo/regions_hierarchy.txt</path_to_regions_hierarchy_file> -->
- <!-- Path to directory with files containing names of regions -->
- <!-- <path_to_regions_names_files>/opt/geo/</path_to_regions_names_files> -->
- <!-- Configuration of external dictionaries. See:
- https://clickhouse.yandex/docs/en/dicts/external_dicts/
- -->
- <dictionaries_config>*_dictionary.xml</dictionaries_config>
- <!-- Uncomment if you want data to be compressed -% better.
- Don't do that if you just started using ClickHouse.
- -->
- <compression incl="clickhouse_compression">
- <!--
- <!- - Set of variants. Checked in order. Last matching case wins. If nothing matches, lz4 will be used. - ->
- <case>
- <!- - Conditions. All must be satisfied. Some conditions may be omitted. - ->
- <min_part_size></min_part_size> <!- - Min part size in bytes. - ->
- <min_part_size_ratio>0.01</min_part_size_ratio> <!- - Min size of part relative to whole table size. - ->
- <!- - What compression method to use. - ->
- <method>zstd</method>
- </case>
- -->
- </compression>
- <!-- Allow to execute distributed DDL queries (CREATE, DROP, ALTER, RENAME) on cluster.
- Works only if ZooKeeper is enabled. Comment it if such functionality isn't required. -->
- <distributed_ddl>
- <!-- Path in ZooKeeper to queue with DDL queries -->
- <path>/clickhouse/task_queue/ddl</path>
- <!-- Settings from this profile will be used to execute DDL queries -->
- <!-- <profile>default</profile> -->
- </distributed_ddl>
- <!-- Settings to fine tune MergeTree tables. See documentation in source code, in MergeTreeSettings.h -->
- <!--
- <merge_tree>
- <max_suspicious_broken_parts></max_suspicious_broken_parts>
- </merge_tree>
- -->
- <!-- Protection from accidental DROP.
- If size of a MergeTree table is greater than max_table_size_to_drop (in bytes) than table could not be dropped with any DROP query.
- If you want do delete one table and don't want to restart clickhouse-server, you could create special file <clickhouse-path>/flags/force_drop_table and make DROP once.
- By default max_table_size_to_drop is 50GB; max_table_size_to_drop= allows to DROP any tables.
- The same for max_partition_size_to_drop.
- Uncomment to disable protection.
- -->
- <!-- <max_table_size_to_drop></max_table_size_to_drop> -->
- <!-- <max_partition_size_to_drop></max_partition_size_to_drop> -->
- <!-- Example of parameters for GraphiteMergeTree table engine -->
- <graphite_rollup_example>
- <pattern>
- <regexp>click_cost</regexp>
- <function>any</function>
- <retention>
- <age></age>
- <precision></precision>
- </retention>
- <retention>
- <age></age>
- <precision></precision>
- </retention>
- </pattern>
- <default>
- <function>max</function>
- <retention>
- <age></age>
- <precision></precision>
- </retention>
- <retention>
- <age></age>
- <precision></precision>
- </retention>
- <retention>
- <age></age>
- <precision></precision>
- </retention>
- </default>
- </graphite_rollup_example>
- <!-- Directory in <clickhouse-path> containing schema files for various input formats.
- The directory will be created if it doesn't exist.
- -->
- <format_schema_path>/data/clickhouse/format_schemas/</format_schema_path>
- <!-- Uncomment to disable ClickHouse internal DNS caching. -->
- <!-- <disable_internal_dns_cache></disable_internal_dns_cache> -->
- </yandex>
The users.xml.j2 template file:
- <?xml version="1.0"?>
- <yandex>
- <!-- Profiles of settings. -->
- <profiles>
- <!-- Default settings. -->
- <default>
- <!-- Maximum memory usage for processing single query, in bytes. -->
- <max_memory_usage></max_memory_usage>
- <!-- Use cache of uncompressed blocks of data. Meaningful only for processing many of very short queries. -->
- <use_uncompressed_cache></use_uncompressed_cache>
- <!-- How to choose between replicas during distributed query processing.
- random - choose random replica from set of replicas with minimum number of errors
- nearest_hostname - from set of replicas with minimum number of errors, choose replica
- with minimum number of different symbols between replica's hostname and local hostname
- (Hamming distance).
- in_order - first live replica is chosen in specified order.
- -->
- <load_balancing>random</load_balancing>
- <!-- log values for select queries -->
- <log_queries></log_queries>
- </default>
- <!-- Profile that allows only read queries. -->
- <readonly>
- <max_memory_usage></max_memory_usage>
- <use_uncompressed_cache></use_uncompressed_cache>
- <load_balancing>random</load_balancing>
- <readonly></readonly>
- </readonly>
- </profiles>
- <!-- Users and ACL. -->
- <users>
- <!-- If user name was not specified, 'default' user is used. -->
- <default>
- <!-- Password could be specified in plaintext or in SHA256 (in hex format).
- If you want to specify password in plaintext (not recommended), place it in 'password' element.
- Example: <password>qwerty</password>.
- Password could be empty.
- If you want to specify SHA256, place it in 'password_sha256_hex' element.
- Example: <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
- How to generate decent password:
- Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
- In first line will be password and in second - corresponding SHA256.
- -->
- <password></password>
- <!-- List of networks with open access.
- To open access from everywhere, specify:
- <ip>::/</ip>
- To open access only from localhost, specify:
- <ip>::</ip>
- <ip>127.0.0.1</ip>
- Each element of list has one of the following forms:
- <ip> IP-address or network mask. Examples: 213.180.204.3 or 10.0.0.1/ or 10.0.0.1/255.255.255.0
- 2a02:6b8:: or 2a02:6b8::/ or 2a02:6b8::/ffff:ffff:ffff:ffff::.
- <host> Hostname. Example: server01.yandex.ru.
- To check access, DNS query is performed, and all received addresses compared to peer address.
- <host_regexp> Regular expression for host names. Example, ^server\d\d-\d\d-\d\.yandex\.ru$
- To check access, DNS PTR query is performed for peer address and then regexp is applied.
- Then, for result of PTR query, another DNS query is performed and all received addresses compared to peer address.
- Strongly recommended that regexp is ends with $
- All results of DNS requests are cached till server restart.
- -->
- <networks incl="networks" replace="replace">
- <ip>::/</ip>
- </networks>
- <!-- Settings profile for user. -->
- <profile>default</profile>
- <!-- Quota for user. -->
- <quota>default</quota>
- </default>
- <!-- Example of user with readonly access. -->
- <readonly>
- <password></password>
- <networks incl="networks" replace="replace">
- <ip>::</ip>
- <ip>127.0.0.1</ip>
- </networks>
- <profile>readonly</profile>
- <quota>default</quota>
- </readonly>
- </users>
- <!-- Quotas. -->
- <quotas>
- <!-- Name of quota. -->
- <default>
- <!-- Limits for time interval. You could specify many intervals with different limits. -->
- <interval>
- <!-- Length of interval. -->
- <duration></duration>
- <!-- No limits. Just calculate resource usage for time interval. -->
- <queries></queries>
- <errors></errors>
- <result_rows></result_rows>
- <read_rows></read_rows>
- <execution_time></execution_time>
- </interval>
- </default>
- </quotas>
- </yandex>
The metrika.xml.j2 template file:
```xml
<yandex>
    <clickhouse_remote_servers>
        <cluster-shard{{shard_num}}replica{{replica_num}}>
        {% for i in range(,,) %}
        {% if i< %}
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>{{shard_host_pre}}{{i}}</host>
                    <port>{{shard_port}}</port>
                    <user>{{shard_user}}</user>
                </replica>
            </shard>
        {% else %}
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>{{shard_host_pre}}{{i}}</host>
                    <port>{{shard_port}}</port>
                    <user>{{shard_user}}</user>
                </replica>
            </shard>
        {% endif %}
        {% endfor %}
        </cluster-shard{{shard_num}}replica{{replica_num}}>
    </clickhouse_remote_servers>
    <zookeeper-servers>
    {% for i in range(,,) %}
    {% if i< %}
        <node index="{{i}}">
            <host>{{zk_host}}{{i}}</host>
            <port>{{zk_prot}}</port>
        </node>
    {% else %}
        <node index="{{i}}">
            <host>{{zk_host}}{{i}}</host>
            <port>{{zk_prot}}</port>
        </node>
    {% endif %}
    {% endfor %}
    </zookeeper-servers>
    <macros>
    </macros>
    <clickhouse_compression>
        <case>
            <min_part_size></min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
    <networks>
        <ip>::/</ip>
    </networks>
</yandex>
```
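Since the numeric bounds of the range() loops were lost from this listing, here is roughly what one rendered shard entry of metrika.xml would look like. The values are assumptions for illustration: i=1, shard_port=9000, shard_user=default:

```xml
<shard>
    <internal_replication>true</internal_replication>
    <replica>
        <host>bjg-techcenter-appservice-appservice-push-push-clickhouse-1</host>
        <port>9000</port>
        <user>default</user>
    </replica>
</shard>
```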
In my case there are only five machines to set up, so the configuration is fairly simple.
The playbook YAML file:
```yaml
#ansible-playbook playbook.yml --list-hosts
#ansible-playbook /etc/ansible/install_file/clickhouse_install.yml --list-hosts
#https://www.cnblogs.com/f-ck-need-u/p/7571974.html
- hosts: clickhouse_cluster_setup_beijing   # the host or group to target
  remote_user: root                         # run as root
  vars:                                     # variables
    ck_version: 19.4.0.49-.el7              # clickhouse rpm version
    # shard-related variables
    shard_port:
    shard_user: default
    shard_host_pre: bjg-techcenter-appservice-appservice-push-push-clickhouse-
    shard_num:
    replica_num:
    # zookeeper-related variables
    zk_prot:
    zk_host: bje-data-platform-zookeeper-
  tasks:
    - name: download and install curl   # install curl on every machine
      shell: yum install -y curl
    - name: download and execute the clickhouse installation script provided by packagecloud.io
      shell: curl -s https://packagecloud.io/install/repositories/altinity/clickhouse/script.rpm.sh | sudo bash
    - name: install clickhouse-server and clickhouse-client at the pinned version
      shell: sudo yum install -y clickhouse-server-{{ck_version}} clickhouse-client-{{ck_version}} clickhouse-compressor-{{ck_version}}
    - name: batch-modify the startup script   # point the init script's log dir at /data/clickhouse/logs
      shell: sed -i 's/\/var\/log\/clickhouse-server/\/data\/clickhouse\/logs/g' /etc/init.d/clickhouse-server
    - name: write the metrika config file
      template: src=/etc/ansible/install_file/metrika.xml.j2 dest=/etc/clickhouse-server/metrika.xml backup=yes
    - name: write the main config file
      template: src=/etc/ansible/install_file/config.xml.j2 dest=/etc/clickhouse-server/config.xml backup=yes
    - name: write the users config file
      template: src=/etc/ansible/install_file/users.xml.j2 dest=/etc/clickhouse-server/users.xml backup=yes
    - name: make clickhouse a login user
      shell: usermod -s /bin/bash clickhouse
    - name: create the data directory
      shell: mkdir -p /data/clickhouse/logs
    - name: give the clickhouse user ownership of /data/clickhouse
      shell: chown -R clickhouse:clickhouse /data/clickhouse/
    - name: restart the clickhouse-server service
      shell: service clickhouse-server restart
```
The ClickHouse installation needs the config.xml, users.xml, and metrika.xml files, so shared templates on the control machine are used to create and fill in those files on every managed machine in one pass. In my example, the three .j2 files under /etc/ansible/install_file/ on the control machine are rendered via the template syntax into the corresponding .xml files under /etc/clickhouse-server/ on each managed machine; adapt the templates to your own environment as needed.
3. A scheduled task that deletes partition data from the ClickHouse cluster
The YAML file:
```yaml
- hosts: delete_ck_host        # the host or group to target
  remote_user: root            # run as root
  vars:                        # variables
    port:
    tableName:
      - ck_local_qukan_report_cmd_11001
      - ck_local_qukan_report_cmd_under8
  tasks:
    - name: compute the cutoff date
      command: date -d "2 days ago" +%Y-%m-%d
      register: date_output
    - name: list the partitions of each table
      command: clickhouse-client --host {{inventory_hostname}} --port {{port}} --database default --multiquery -q "SELECT DISTINCT formatDateTime(log_timestamp, '%F') AS partition FROM {{item}}"
      loop: "{{tableName}}"
      register: partitions
    - name: drop partitions older than the cutoff
      shell: clickhouse-client --host {{inventory_hostname}} --port {{port}} --database default --multiquery -q "alter table {{item[0]}} drop partition '{{item[1]}}'"
      # item[0] is the table, item[1] the partition; partitions are %F-formatted,
      # so a plain string comparison against the cutoff date is enough
      when: item[1] < date_output.stdout
      # note: results[0] reuses the first table's partition list for both tables,
      # i.e. this assumes the tables share the same set of partitions
      with_nested:
        - "{{tableName}}"
        - "{{partitions.results[0].stdout_lines}}"
```
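The when: clause above relies on the fact that %F-formatted dates (YYYY-MM-DD) sort lexicographically in date order, so a plain string comparison suffices. A minimal sketch of that guard in plain bash; the partition value is a hypothetical example:

```shell
#!/bin/bash
# Partitions are formatted YYYY-MM-DD, so string order equals date order.
cutoff=$(date -d "2 days ago" +%Y-%m-%d)   # same cutoff the playbook registers
partition="2000-01-01"                     # hypothetical old partition name
if [[ "$partition" < "$cutoff" ]]; then
  echo "would drop partition $partition"
fi
```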
The shell script:
```shell
#!/bin/bash
echo "---------------------------------delete_ck.sh task start---------------------------------------------"
ansible-playbook /etc/ansible/install_file/task/delete_ck.yml
echo "---------------------------------delete_ck.sh task end--------------------------------------------"
# * * * /etc/ansible/install_file/task/delete_ck.sh
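The commented line above is the (partially elided) crontab entry for the script. A complete entry, assuming purely for illustration that the job should run daily at 03:00, would look like:

```shell
# Hypothetical schedule: every day at 03:00, with output logged for inspection
0 3 * * * /etc/ansible/install_file/task/delete_ck.sh >> /var/log/delete_ck.log 2>&1
```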