SVN Full + Incremental Backup Scripts
1. Full Backup
Environment: one primary SVN server and one standby SVN server (the standby mainly serves as the backup target). Real-time replication via hook scripts can be added later; that script will be shared separately.
How it works: svnadmin hotcopy takes a hot copy of each repository, a series of checks is run, and the result is then pushed to the standby machine with rsync.
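The flow just described — hotcopy, verify, then rsync to the standby — can be sketched minimally as below. This is a hedged sketch, not the production script: `REPO`, the paths, and the rsync module name `svn` on host `backup` are all placeholders.

```shell
#!/bin/bash
# Minimal sketch of the full-backup flow: hotcopy -> verify -> rsync.
# REPO, SRC, DST and the rsync destination are placeholders.
REPO=myrepo
SRC=/data/svn/$REPO
DST=/tmp/svn_full_bak/${REPO}.bak

backup_one_repo() {
    mkdir -p "$(dirname "$DST")"
    # hotcopy takes a consistent copy of a live repository
    svnadmin hotcopy "$SRC" "$DST" || return 1
    # verify the copy before shipping it off-host
    svnadmin verify -q "$DST" || return 1
    # push to the standby via an rsync daemon module
    rsync -az "$DST" "backup::svn/${REPO}_$(date +%F)"
}

if command -v svnadmin >/dev/null 2>&1; then
    backup_one_repo && echo "backup OK" || echo "backup failed"
else
    echo "svnadmin not installed; sketch only"
fi
```

Verifying the hotcopy before the transfer catches corrupt copies early, so a bad backup never overwrites a good one on the standby.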
The script:
=======
#!/bin/bash
#Version: V2
#Date: 2015-02-03
#Author: wang
CONFDIR=/usr/local/httpd
BASEDIR=/home/xxx/scripts/svn_full_bak
SVNCMD=/usr/local/subversion/bin/svnadmin
SVNDIR=/data/svn
BAKDIR=$BASEDIR/full_bak_dir
SCDIR=/home/xxx/scripts
DISK=`df -BG |sed -n '2p'|awk '{print $4}'|tr -d 'G'`   # free space; -BG forces whole gigabytes
######define function########
clear_bak_dir(){
sleep 2
: "${BAKDIR:=/home/xxx/scripts/svn_full_bak/full_bak_dir}"
rm -rf $BAKDIR/${repo}.bak
}
rm -f /home/xxx/scripts/svn_full_bak/logs/linshi.log
echo -e "\n" >>$BASEDIR/logs/Info.log
echo "######################### Backup Start in Time: $(date +%F-%T)#########################" >>$BASEDIR/logs/Info.log
echo "## SVN周备开始 Time:$(date +%F-%T) ##" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
##########Begin Loop backup SVN Repository##########
while read repo other
do
[ $DISK -lt 30 ] && {
# echo "Warning: Disk free less than 30G, svn backup failed (Time: $(date +%F-%T) ==>repo:$repo)"|mailx -s "Disk Free Check" wangbogui@xxx.com
echo "Warning: Disk free less than 30G, svn backup failed (Time: $(date +%F-%T) ==>repo:$repo)" >>$BASEDIR/logs/Info.log
echo "######################### Backup Stop in Time: $(date +%F-%T)#########################" >>$BASEDIR/logs/Info.log
echo "Warning: Disk free less than 30G, svn backup failed (Time: $(date +%F-%T) ==>repo:$repo)" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
echo "## SVN周备结束 Time:$(date +%F-%T) ##" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
mailx -s "SVN 周日全备" wangbogui@xxx.com < /home/xxx/scripts/svn_full_bak/logs/linshi.log
exit 1
}
: "${BAKDIR:=/home/xxx/scripts/svn_full_bak/full_bak_dir}"
[ ! -d $BAKDIR ] && mkdir -p $BAKDIR
/bin/chown -R xxx $BAKDIR
[ -d $BAKDIR/${repo}.bak ] && {
rm -rf $BAKDIR/${repo}.bak
}
[ ! -z "$other" ] && continue
[ ! -d $SVNDIR/$repo ] && {
echo "---- $repo repository ---- not exist ..."
echo "---- $repo repository ---- not exist ..." >>$BASEDIR/logs/Info.log
continue
}
echo " local backup start in Time: $(date +%F-%T) ==>repo:$repo" >>$BASEDIR/logs/Info.log
$SVNCMD hotcopy $SVNDIR/$repo $BAKDIR/${repo}.bak
[ $? -ne 0 ] && {
echo " local backup failed in Time: $(date +%F-%T) ==>repo:$repo" >>$BASEDIR/logs/Info.log
echo "$repo Local is Failed" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
clear_bak_dir
continue
}
############Begin remote back##############
/usr/bin/rsync -avz $BAKDIR/${repo}.bak xxx_web@xxx::${repo}/${repo}_$(date +%F) --password-file=/etc/rsyncd.passwd
[ $? -ne 0 ] && {
echo "remote backup failed in Time: $(date +%F-%T) ==>repo:$repo" >>$BASEDIR/logs/Info.log
echo "$repo is Failed" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
continue
}||{
echo "remote backup ..OK. in Time: $(date +%F-%T) ==>repo:$repo" >>$BASEDIR/logs/Info.log
echo "$repo is OK" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
}
clear_bak_dir
sleep 3
done < /home/xxx/scripts/svn_full_bak/repository.txt
###########Begin back svn conf directory##############
while read conf_dir rename other
do
[ -n "$conf_dir" -a -d "$SVNDIR/$conf_dir" ] && continue
[ -z "$rename" ] && continue
[ ! -z "$other" ] && {
echo "---- $conf_dir ---- Invalid format,please define Two parameter"
continue
}
[ ! -d "$conf_dir" ] && {
echo "---- $conf_dir conf_dir ---- not exist ..."
continue
}
echo "####### remote start backup --**${conf_dir}**--#######"
/usr/bin/rsync -avz ${conf_dir} xxx_web@xxx::bakdir/${rename}_$(date +%F) --password-file=/etc/rsyncd.passwd &>/dev/null
[ $? -eq 0 ] && {
echo "remote backup ${conf_dir} ..OK. in Time: $(date +%F-%T)" >>$BASEDIR/logs/Info.log
echo "${conf_dir} is OK" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
echo "####### remote backup .OK.. --**${conf_dir}**--#######"
}||{
echo "remote backup ${conf_dir} failed in Time: $(date +%F-%T)" >>$BASEDIR/logs/Info.log
echo "${conf_dir} is Failed" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
echo "remote backup ${conf_dir} is failed.."
}
done < /home/xxx/scripts/svn_full_bak/repository.txt
echo "######################### Backup Complete in Time: $(date +%F-%T)#########################" >>$BASEDIR/logs/Info.log
echo "## SVN周备结束 Time:$(date +%F-%T) ##" >>/home/xxx/scripts/svn_full_bak/logs/linshi.log
mailx -s "SVN 周日全备" wangbogui@xxx.com < /home/xxx/scripts/svn_full_bak/logs/linshi.log
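The hook-based real-time backup mentioned in the environment notes is deferred by the author; one common shape for it is a post-commit hook that dumps just the committed revision and ships it to the standby. The sketch below is hypothetical — the hook body, the `backup::incoming` rsync module, and all paths are assumptions, not the author's script.

```shell
#!/bin/bash
# Hypothetical post-commit hook (hooks/post-commit) for near-real-time backup.
# Subversion invokes the hook as: post-commit REPOS REV
post_commit() {
    local repos=$1 rev=$2 name
    name=$(basename "$repos")
    # dump only the newly committed revision
    svnadmin dump --incremental -r "$rev" "$repos" \
        | gzip > "/tmp/${name}_${rev}.dump.gz"
    # ship it to the standby over the existing rsync daemon setup
    rsync -az "/tmp/${name}_${rev}.dump.gz" backup::incoming/ \
        --password-file=/etc/rsyncd.passwd
}
# A real hook file would end with:  post_commit "$1" "$2"
```

Because the hook runs synchronously after every commit, anything slow (network push, compression of large revisions) is better handed to a background job so commits are not delayed.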
2. Incremental Backup
Environment: one primary SVN server and one standby SVN server (the standby mainly serves as the backup target). Real-time replication via hook scripts can be added later; that script will be shared separately.
How it works:
1. The standby's repository directory is NFS-mounted on the primary, so svnlook youngest can compare each repository's revision number on the primary against the standby. (Once mounted, the standby repositories behave like local ones, so the youngest revision on both sides can be read, the revision gap determined, and the difference dumped and packaged.)
2. After mounting and dumping, the dump files are placed in the mounted directory; on the standby, a restore script loads each updated repository and mails the result to the designated recipients.
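The revision comparison at the heart of this scheme can be sketched as below. `dump_range` computes the `(old+1):new` span to dump, and `incremental_backup` shows how it would be used; the paths and repository names are placeholders, not the production layout.

```shell
#!/bin/bash
# Sketch of the incremental idea: compare youngest revisions, dump the gap.

# Print the dump range "start:end"; print nothing when already up to date.
dump_range() {
    local old=$1 new=$2
    [ "$old" -lt "$new" ] && echo "$((old + 1)):$new"
}

# How the range would be used (defined only, not executed here).
incremental_backup() {
    local repo=$1 master=/data/svn standby=/svnbak range
    range=$(dump_range "$(svnlook youngest "$standby/$repo")" \
                       "$(svnlook youngest "$master/$repo")")
    [ -z "$range" ] && { echo "$repo is up to date"; return 0; }
    mkdir -p "$standby/tmp/$repo"
    svnadmin dump --incremental -r "$range" "$master/$repo" \
        > "$standby/tmp/$repo/${repo}_${range}"
}

dump_range 100 103   # prints 101:103
```

Dumping from `old+1` rather than `old` matters: revision `old` already exists on the standby, and loading it again would make `svnadmin load` fail.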
The scripts:
======
Backup script on the primary SVN:
#!/bin/bash
CMD1=/usr/local/subversion/bin/svnlook
CMD2=/usr/local/subversion/bin/svnadmin
SCDIR=/home/xxx/scripts/svn_incre_bak
BAKDIR=$SCDIR/incre_bak_dir
LOGS=/home/xxx/scripts/svn_incre_bak/logs
SVNDIR=/data/svn
SVNBAK=/svnbak
rm -f /svnbak/linshi.log
echo -e "\n" >>$LOGS/Info.log
echo "==================> Incremental Start in Time: $(date +%F-%T) <=========================" >>$LOGS/Info.log
echo "##SVN 增量备份开始 Time:$(date +%F-%T)##" >>/svnbak/linshi.log
#######check mount###############################
count=`ls /svnbak |wc -l`
/bin/mount |/bin/grep xxx
[ $? -ne 0 -o $count -lt 1 ] && {
/bin/mount -t nfs xxx:/data/svn /svnbak
RET=$?
}||{
RET=0
}
times=0
while true
do
if [ $RET -ne 0 ]
then
/bin/umount -lf /svnbak
/bin/mount -t nfs xxx:/data/svn/ /svnbak
count=`ls /svnbak |wc -l`
/bin/mount |grep xxx
[ $? -ne 0 -o $count -lt 1 ] && {
RET=1
}||{
RET=0
}
sleep 1
let times++
else
break
fi
[ $times -eq 10 ]&&{
echo "mount xxx failed in Time: $(date +%F-%T)" >>$LOGS/Info.log
echo "==================> Incremental Stop(failed:mount) in Time: $(date +%F-%T) <============" >>$LOGS/Info.log
echo "mount xxx is Failed Time: $(date +%F-%T)" >>/svnbak/linshi.log
echo "##SVN 增量备份结束 Time:$(date +%F-%T)##" >>/svnbak/linshi.log
exit 1
}
done
##################################Begin backup################################################
rm -rf $SVNBAK/tmp/*
while read repo
do
[ ! -d $SVNBAK/$repo ] && {
echo "$repo not exist" >>$LOGS/Info.log
sleep 1
continue
}
V_NEW=`$CMD1 youngest $SVNDIR/$repo`
V_OLD=`$CMD1 youngest $SVNBAK/$repo`
V_OLD_1=$((${V_OLD}+1))
if [ $V_OLD -lt $V_NEW ]
then
sleep 1
$CMD2 dump --incremental -r ${V_OLD_1}:${V_NEW} $SVNDIR/$repo >$BAKDIR/${repo}_${V_OLD_1}:${V_NEW}
[ $? -ne 0 ] && {
echo "backup $repo dump failed" >>$LOGS/Info.log
echo "$repo dump is Failed" >>/svnbak/linshi.log
continue
}||{
echo "$repo dump is OK" >>$LOGS/Info.log
echo "$repo dump is OK" >>/svnbak/linshi.log
/bin/mkdir $SVNBAK/tmp/$repo -p
sleep 1
\cp -r $BAKDIR/${repo}_${V_OLD_1}:${V_NEW} $SVNBAK/tmp/$repo/
}
[ $? -eq 0 ] && {
# echo "$repo" >>$SCDIR/repository_remote.txt
/bin/rm -rf $BAKDIR/${repo}_${V_OLD_1}:${V_NEW}
}||{
echo "$repo copy to /svnbak/tmp failed" >>$LOGS/Info.log
echo "$repo copy to /svnbak/tmp failed" >>/svnbak/linshi.log
}
else
echo "$repo version is newest" >>$LOGS/Info.log
echo "$repo version is newest" >>/svnbak/linshi.log
sleep 1
continue
fi
done < $SCDIR/repository.txt
###################rsync repository.txt to xxx host########################
#[ ! -f $SCDIR/repository_remote.txt ] && {
#echo "==================> Incremental Stop(Version:newest) in Time: $(date +%F-%T) <===========" >>$LOGS/Info.log
#exit 0
#}
#sleep 1
#/usr/bin/rsync -avrz --delete $SCDIR/repository_remote.txt xxx_web@xxx::repolist/repository.txt --password-file=/etc/rsyncd.passwd
#[ $? -ne 0 ] && {
#echo "repository_remote.txt transful failed Time: $(date +%F-%T)" >>$LOGS/Info.log
#}||{
#\cp $SCDIR/repository_remote.txt $SCDIR/repository_remote.txt.bak
#echo "repository_remote.txt transful Successful Time: $(date +%F-%T)" >>$LOGS/Info.log
#/bin/rm -rf $SCDIR/repository_remote.txt
#}
echo "==================> Incremental Stop(Status:Complete) in Time: $(date +%F-%T) <==========" >>$LOGS/Info.log
echo "##SVN 增量备份结束 Time:$(date +%F-%T)##" >>/svnbak/linshi.log
Restore script on the standby SVN:
#!/bin/bash
echo -e "\n" >>/root/scripts/svn_incre_restore/logs/Info.log
echo -e "\n" >>/data/svn/linshi.log
echo "##SVN 增量还原开始 Time:$(date +%F-%T)##" >>/data/svn/linshi.log
echo "####################### Recovery Start in Time: $(date +%F-%T) ######################" >>/root/scripts/svn_incre_restore/logs/Info.log
[ -z "`ls -A /data/svn/tmp 2>/dev/null`" ] && {
echo "No Repository Will Be Recovery ..." >>/root/scripts/svn_incre_restore/logs/Info.log
echo "####################### Recovery Complete in Time: $(date +%F-%T) ######################" >>/root/scripts/svn_incre_restore/logs/Info.log
echo "All repositories are up to date" >>/data/svn/linshi.log
echo "##SVN 增量还原结束 Time:$(date +%F-%T)##" >>/data/svn/linshi.log
mailx -s "SVN 每日增量备份及还原" wangbogui@xxx.com </data/svn/linshi.log
#rm -f /data/svn/linshi.log
exit 0
}
while read repo
do
[ ! -d /data/svn/tmp/$repo ] && continue
/usr/local/subversion/bin/svnadmin load /data/svn/$repo < `ls /data/svn/tmp/$repo/*`
V_NEW=`ls /data/svn/tmp/$repo/*|awk -F ":" '{print $2}'`
V_OLD=`/usr/local/subversion/bin/svnlook youngest /data/svn/$repo`
[ $V_NEW -eq $V_OLD ] && {
/bin/rm -rf /data/svn/tmp/$repo
echo "$repo recovery ..OK.." >>/root/scripts/svn_incre_restore/logs/Info.log
echo "$repo recovery is OK" >>/data/svn/linshi.log
}||{
echo "$repo recovery failed" >>/root/scripts/svn_incre_restore/logs/Info.log
echo "$repo recovery is Failed" >>/data/svn/linshi.log
}
done </root/scripts/svn_incre_restore/repository.txt
sleep 1
/bin/chown -R daemon /data/svn
echo "####################### Recovery Complete in Time: $(date +%F-%T) ######################" >>/root/scripts/svn_incre_restore/logs/Info.log
echo "##SVN 增量还原结束 Time:$(date +%F-%T)##" >>/data/svn/linshi.log
mailx -s "SVN 每日增量备份及还原" wangbogui@xxx.com </data/svn/linshi.log
#rm -f /data/svn/linshi.log
Sublime Text还是文本编辑器中比较不错的,就是他的文件对比有些差劲吧,还有中文输入需要打补丁,不知道开发者是怎么想的... 当然,这个软件是收费的,但是不买也能一直的使用,在我天朝就这点好处 ...