Multi-server configuration management with ssh, shell, Python, iptables, Fabric, Supervisor, and template files
Terminology: NS = the new server being provisioned; OS = the origin (master) server you manage from.
Step 1: On OS, create a template directory, e.g. mkdir bright, to hold configuration files that are awkward to edit remotely, such as redis.conf. At the appropriate step of the setup, a remote cp command overwrites the target file with the template.
(Important: after overwriting, run chmod to restore the permissions the file needs; permissions can change in transit, e.g. chmod 755 /etc/rc.local.)
Besides configuration files, the directory also holds shell scripts (xxx.sh) that bundle several commands into one file.
For example:
export LC_ALL=C
apt-get install python-pip
pip install --upgrade pip
pip install -i http://pypi.douban.com/simple/ saltTesting
pip install -r requirements.txt
pip install pexpect
pip install pymongo==3.2
pip install tornado
pip install supervisor
start.sh
One of its lines,
pip install -r requirements.txt
uses the dependency list captured on OS with pip freeze > requirements.txt. (Some entries will fail to install, usually over version issues; while testing, find the ones that cannot install, remove them from requirements.txt, and write a working install command for each into the shell script, e.g. start.sh.)
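That triage is easy to script. Below is a minimal sketch (a hypothetical helper, not part of the original toolchain) that installs each pinned requirement one at a time and reports the failures that should be removed from requirements.txt and reworked into start.sh:
#coding:utf-8
# Hypothetical helper: try each requirement separately, so one bad pin
# does not abort the rest, and collect the ones that fail.
import subprocess

def check_requirements(path='requirements.txt'):
    failed = []
    for line in open(path):
        req = line.strip()
        if not req or req.startswith('#'):
            continue
        if subprocess.call(['pip', 'install', req]) != 0:
            failed.append(req)
    return failed

if __name__ == '__main__':
    for req in check_requirements():
        print 'FAILED (move out of requirements.txt):', req
check_reqs.py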
That covers the simple installs. For complex ones like redis and nginx we use Python scripts instead: first, they give finer control over the commands, and a flexible language makes it easy to vary configuration and customize freely; second, a script is easy to keep and reuse, in this project and anywhere else.
For example, a one-shot script for git commit/push, redis install, mongo install, and nginx install. It is only a sample, written fairly simply; adapt it to your own setup.
#coding:utf-8
import subprocess
import datetime
import time
import random
import sys
import os

MONGO = 'https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.0.6.tgz'
REDIS = 'http://download.redis.io/releases/redis-3.0.7.tar.gz'
NGINX = 'http://nginx.org/download/nginx-1.9.15.tar.gz'
MONGOTAR = 'mongodb-linux-x86_64-3.0.6.tgz'
MONGODIR = 'mongodb-linux-x86_64-3.0.6'
REDISTAR = 'redis-3.0.7.tar.gz'
REDISDIR = 'redis-3.0.7'
NGINXTAR = 'nginx-1.9.15.tar.gz'
NGINXDIR = 'nginx-1.9.15'
SUDOPWD = 'bright'

print datetime.datetime.now()
print ("Available functions -- 0: run commands from a text file; 1: one-shot git commit and push; 2: one-shot mongo install;\n"
       "3: one-shot redis install (after the first run, edit /usr/local/redis/etc/redis.conf to set daemonize yes and bind 127.0.0.1, then run again);\n"
       "4: one-shot nginx install (after the first run, edit /usr/local/nginx/etc/nginx.conf, then run again).\n"
       "Tip: function 0 reads lines pasted straight from `history`; change strcmds() if another format is more convenient.")
# int(raw_input()) instead of bare input(), which eval()s whatever is typed in Python 2
FUNCTIONS = int(raw_input("Select function: "))

def strcmds(file):
    # Parse commands pasted from `history` output: drop the newline and the
    # leading 7-character index column that `history` prints.
    f = open(file)
    lines = f.readlines()
    data = []
    for line in lines:
        temp = line.replace("\n", "")
        temp = temp[7:]
        data.append(temp)
    print data
    return data

def shcall(cmd):
    p = subprocess.Popen(cmd, shell=True, close_fds=True, universal_newlines=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    if cmd.startswith('sud'):
        # Feed the password on stdin; note this only reaches sudo if the
        # command uses `sudo -S`, which reads the password from standard input.
        p.stdin.write('%s\n' % SUDOPWD)
    while p.poll() is None:
        time.sleep(0.1)
        print p.stdout.readline()
    print p.stdout.read()
    #print 'return code:', p.returncode

def showmsg(count, cmds):
    print "luckynum: ", random.randint(0, 999), "Connect me WeChat: brightlike"
    print 'Your work has done: ', "%.2f" % round(float(count) * 100 / len(cmds), 2), '%'

def main():
    count = 0
    if FUNCTIONS == 0:
        cmds = strcmds('a.txt')  # e.g. a pasted redis command history
        for i in cmds:
            print i
            if i.startswith('vi'):
                print 'vim process'
            if i.startswith('sud'):
                print 'admin process'
            shcall(i)
            count = count + 1
            showmsg(count, cmds)
    if FUNCTIONS == 1:
        cmds = ['git status', 'git add -A .', 'git commit -a -m "fast commit all"', 'git push', 'git status']
        for i in cmds:
            with open('git.txt', 'a') as f:  # append; 'w' inside the loop would keep only the last command
                f.write('%s\n' % i)
            print i
            if i.startswith('git pus'):
                print 'push process'
            shcall(i)
            count = count + 1
            showmsg(count, cmds)
    if FUNCTIONS == 2:  # client side: pip install pymongo==3.2
        cmds = ['sudo chmod 777 /usr/local',
                'sudo mkdir -p /data/db',
                'mkdir /usr/local/mongodb',
                'curl -O %s' % MONGO,
                'tar -zxvf %s' % MONGOTAR,
                'mv %s/ /usr/local/mongodb' % MONGODIR,
                'export PATH=/usr/local/mongodb/bin:$PATH',
                'export LC_ALL=C',
                'echo "/usr/local/mongodb/bin/mongod --dbpath=/usr/local/mongodb/data --logpath=/usr/local/mongodb/logs --logappend --auth --port=27017" >> /etc/rc.local',
                'mongod --repair',
                './mongod --dbpath=/data/db --rest',
                'sudo find /usr/local -name mongodb.conf']
        for i in cmds:
            with open('mongo.txt', 'a') as f:
                f.write('%s\n' % i)
            print i
            shcall(i)
            count = count + 1
            showmsg(count, cmds)
    if FUNCTIONS == 3:  # client side: pip install redis
        if os.path.exists('/usr/local/redis/etc/'):
            # already installed: locate the config and start the server with it
            cmds = ['find /usr/local -name redis.conf',
                    '/usr/local/redis/bin/redis-server /usr/local/redis/etc/redis.conf',
                    'ps aux |grep redis']
        else:
            cmds = ['sudo chmod 777 /usr/local',
                    'mkdir -p /usr/local/redis/bin',
                    'mkdir -p /usr/local/redis/etc',
                    'curl -O %s' % REDIS,
                    'tar xvf %s' % REDISTAR,
                    'mv %s /usr/local' % REDISDIR,
                    'export DESTDIR=/usr/local/%s' % REDISDIR,
                    './configure --prefix=/usr/local/%s' % REDISDIR,
                    'make -C /usr/local/%s && make install -C /usr/local/%s' % (REDISDIR, REDISDIR),
                    'cp /usr/local/%s/redis.conf /usr/local/redis/etc' % REDISDIR,
                    'cp -r /usr/local/%s/src/. /usr/local/redis/bin' % REDISDIR]
        for i in cmds:
            with open('redis.txt', 'a') as f:
                f.write('%s\n' % i)
            print i
            shcall(i)
            count = count + 1
            showmsg(count, cmds)
    if FUNCTIONS == 4:  # nginx
        if os.path.exists('/usr/local/%s/' % NGINXDIR):
            # already unpacked: locate the config and start nginx with it
            cmds = ['find /usr/local -name nginx.conf',
                    '/usr/local/nginx/sbin/nginx -c /usr/local/nginx/etc/nginx.conf',
                    'ps aux |grep nginx']
        else:
            # pcre/openssl development headers come from the system package
            # manager (yum here), not from pip
            cmds = ['yum -y install pcre-devel',
                    'yum -y install openssl openssl-devel',
                    'curl -O %s' % NGINX,
                    'tar xvf %s' % NGINXTAR,
                    'mkdir -p /usr/local/nginx/bin',
                    'mkdir -p /usr/local/nginx/etc',
                    'mv %s /usr/local' % NGINXDIR,
                    'export DESTDIR=/usr/local/%s' % NGINXDIR,
                    'cd /usr/local/%s && ./configure' % NGINXDIR,
                    'make -C /usr/local/%s && make install -C /usr/local/%s' % (NGINXDIR, NGINXDIR),
                    'cp /usr/local/%s/nginx.conf /usr/local/nginx/etc' % NGINXDIR,
                    'find /usr/local -name nginx.conf']
        for i in cmds:
            print i
            shcall(i)
            count = count + 1
            showmsg(count, cmds)
    print "Hope you happy to work, even working on computer every day is boring.\nConnect me WeChat: brightlike"
    #shcall('ping -c 10 -i 0.5 119.75.217.109')

if __name__ == '__main__':
    main()
shells.py
Step 2: ssh. Remote configuration is only worth discussing once a stable connection is possible.
On OS:
cat .ssh/id_rsa.pub  # view the public key
On NS:
ssh-keygen -t rsa -C "" (put something recognizable inside the quotes) -- generates the .ssh directory and a key pair
vi .ssh/authorized_keys -- add the key from OS
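Before moving on, it is worth checking that key-based login actually works from OS to every NS, or the Fabric tasks below will stall on password prompts. A small sketch (hypothetical helper; BatchMode makes ssh fail fast instead of asking for a password):
#coding:utf-8
# Hypothetical helper: confirm passwordless ssh to each host before
# handing the list to Fabric.
import subprocess

HOSTS = ['111.111.111.111', '222.222.222.222']  # placeholder addresses, same as the fabfile below

for host in HOSTS:
    code = subprocess.call(['ssh', '-o', 'BatchMode=yes',
                            '-o', 'ConnectTimeout=5', host, 'true'])
    print host, 'OK' if code == 0 else 'FAILED (check authorized_keys)'
check_ssh.py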
Step 3: Fabric, the core of this article; the remote operations are built on it. Install: pip install fabric
Invocation rule: fab plus the name of a function in fabfile.py, e.g. fab setup
Sample code:
#coding:utf-8
from fabric.api import *
from fabric.contrib.console import confirm
import os

# After pushing a file, remember to fix its permissions.
SENDROUTE = '/root'
SENDFILE = 'start.sh'

env.hosts = ['111.111.111.111:1111', '222.222.222.222']  # server list

# Role groups: servers still to be installed, servers for later tuning, and so on.
env.roledefs = {
    'setup': ['111.111.111.111:1111', '222.222.222.222'],
    'seo': ['111.111.111.111:1111', '222.222.222.222:2222', '3.3.3.3'],
}

# The decorator below restricts the task to one role group (here the install
# group); without it, the task runs against every host in env.hosts.
@roles('setup')
def setup():
    with lcd('/root'):  # lcd runs commands in a local directory
        with settings(warn_only=True):
            # xxx.tar holds all the template files, program code, and modules packed on OS
            result = put("/root/xxx.tar")
            if result.failed and not confirm("put file failed, Continue[Y/N]?"):
                abort("Aborting file put task!")
    with cd('/root'):  # cd runs commands in a directory on the remote server
        run('tar zxvf xxx.tar && rm -f xxx.tar')
    with cd('/root/bright'):
        run('bash start.sh')
    # Everything after this is more of the same; the mongo install as one example:
    with cd('/root/bright/mongo'):
        run('python mongo.py')
        run('bash mongo.sh')

# Push a single file; mainly for tuning and code updates after the initial setup.
def send():
    with cd(SENDROUTE):
        with settings(warn_only=True):
            result = put("%s/%s" % (SENDROUTE, SENDFILE))
            if result.failed and not confirm("put file failed, Continue[Y/N]?"):
                abort("Aborting file put task!")
    with cd('/root'):
        run('mv %s %s' % (SENDFILE, SENDROUTE))
        run('chmod 755 %s/%s' % (SENDROUTE, SENDFILE))
fabfile.py
Archive the needed files and the template directory ahead of time: tar --exclude=.git --exclude=.ssh -czvf xxx.tar /root (excludes drop things that should not travel, such as .git).
You can also add small tasks for checking logs (tail -20 xxx.stderr.log). One gotcha: Fabric and nohup. A nohup command run directly through Fabric does not take effect by default; the workaround is to go through a script and add a short delay:
run('echo "nohup python main.py &" > xxx.sh')
run('bash xxx.sh && sleep 1')
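Wrapped up as a complete task, the workaround looks roughly like this (a sketch; main.py, the script name, and the output redirection are illustrative assumptions):
#coding:utf-8
# Sketch of the Fabric-plus-nohup workaround: write the nohup command into a
# script, run it, and sleep briefly so the process detaches before Fabric
# tears down the ssh session. Output is redirected so nohup cannot hold the
# channel open.
from fabric.api import run, cd

def start_main():
    with cd('/root/bright'):
        run('echo "nohup python main.py > main.out 2>&1 &" > run_main.sh')
        run('bash run_main.sh && sleep 1')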
Step 4: shell. Scripts solve the too-many-commands problem and handle install, configuration, and startup files.
A trick for getting the local IP with sed. It does not fit every machine; boxes with more complex networking may need the match expression adjusted:
ifconfig | sed -En 's/127.0.0.*//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'
get_ip.sh
With the IP in hand, we substitute it into the template files that need it, with one more script:
cmd="s/{HOST_IP}/"$(bash get_ip.sh)"/"
sed -i "$cmd" /root/bright/xxx.py # xxx.py is a template file; beforehand, change every spot that needs the current IP to HOST = '{HOST_IP}'
replace_ip.sh
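The same substitution can be done from Python, which is handier when one template carries several placeholders. A minimal sketch (the {HOST_IP} placeholder and template path follow the example above; the rest is a hypothetical helper):
#coding:utf-8
# Hypothetical helper: fill {HOST_IP} (and any other placeholder) into a
# template file in place.
import socket

def local_ip():
    # Same idea as get_ip.sh: learn the outbound interface address by
    # opening a UDP socket; no packet is actually sent.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(('8.8.8.8', 80))
    ip = s.getsockname()[0]
    s.close()
    return ip

def render(path, values):
    text = open(path).read()
    for key, val in values.items():
        text = text.replace('{%s}' % key, val)
    open(path, 'w').write(text)

if __name__ == '__main__':
    render('/root/bright/xxx.py', {'HOST_IP': local_ip()})
render_ip.py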
Step 5: supervisor, a process manager. Install: pip install supervisor
The configuration file goes straight into the template directory.
Example:
[unix_http_server]
file = /var/run/supervisor.sock
chmod = 0777
chown = root:root

[inet_http_server]
; web management console
port = 11111        ; pick any free port
username = bright
password = bright

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl = unix:///var/run/supervisor.sock

[supervisord]
logfile=/var/log/supervisord/supervisord.log ;(main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ;(max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/var/run/supervisord.pid ;(supervisord pidfile;default supervisord.pid)
#nodaemon=true ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
user=root ; (default is current user, required if root)
; Example program entry below; if a project runs several processes, repeat this block for each one.
[program:xxx]
directory = /root/xxx/xxx
command = python main.py
user = root
autostart = true
autorestart = true
stderr_logfile = /var/log/supervisor/xxx.stderr.log
stdout_logfile = /var/log/supervisor/xxx.stdout.log
supervisord.conf
Supervisor still will not start on boot or after a reboot; one more script fixes that:
cp /root/bright/supervisord.conf /etc/
mkdir -p /var/log/supervisor /var/log/supervisord  # supervisord may fail to start if the log directories are missing
cp /root/bright/rc.local /etc/
chmod 755 /etc/rc.local  # rc.local is the boot autostart file
chmod 644 /etc/supervisord.conf  # 644 is a reasonable default for the config
autorun.sh
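Pushed through Fabric, the whole supervisor setup becomes one task; a sketch, assuming the paths used throughout this article:
#coding:utf-8
# Sketch: run autorun.sh remotely, then reload supervisord if it is already
# running, or start it if it is not.
from fabric.api import run, settings

def deploy_supervisor():
    run('bash /root/bright/autorun.sh')
    with settings(warn_only=True):
        result = run('supervisorctl -c /etc/supervisord.conf update')
        if result.failed:
            run('supervisord -c /etc/supervisord.conf')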
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
supervisord -c /etc/supervisord.conf
# Anything else that needs to start at boot without supervisor can go before
# the "exit 0" line. After saving, check the permissions with ls -l /etc/rc.local;
# if autostart still fails, try running dpkg-reconfigure dash and choosing "No".
exit 0
rc.local
Step 6: iptables. Filtering rules keep outside intruders away from things like the database.
Take guarding redis against intrusion as the example. Accidentally exposing the port to the public internet is very dangerous, and redis started with defaults listens publicly; so in the config file be sure to change bind to the local 127.0.0.1, and always start redis through that config file.
iptables -F
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 6379 -j ACCEPT # 6379 is redis's default port; copy the pattern for other services. Configure iptables with care, or you may lock yourself out, which is miserable.
iptables -A INPUT -p tcp --dport 6379 -j DROP
iptables.sh
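To roll the same rules out to every host, the script can be pushed and applied with Fabric, in the same pattern as the send() task above (a sketch; the paths are assumptions):
#coding:utf-8
# Sketch: push iptables.sh to each host, apply it, then list the resulting
# rules as a sanity check.
from fabric.api import put, run, cd

def apply_iptables():
    put('iptables.sh', '/root/iptables.sh')
    with cd('/root'):
        run('bash iptables.sh')
        run('iptables -L -n')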
A sample redis.conf (the file is long):
# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf

################################ GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis/redis-server.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
bind 127.0.0.1

# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /var/run/redis/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis-server.log

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save "" save 900 1
save 300 10
save 60 10000 # By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
slave-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability. no-appendfsync-on-rewrite no # Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
redis.conf
Questions are welcome in the comments; that is all for now.