1. Introduction

Simply put, a Rakefile is a makefile written in Ruby syntax, and rake is the tool that corresponds to make. In Ruby on Rails, database initialization, data seeding, cleanup, and testing are all driven by rake.

Features:

1. Create and run scripts as named tasks

2. Track and manage dependencies between tasks

2. Syntax

A Rakefile is built from a few basic constructs:

Dependencies: =>

Default task: default

Namespaces: namespace

Task descriptions: desc

Task invocation: invoke

3. Example

Program 1: data backup

# = S3 Rake - Use S3 as a backup repository for your SVN repository, code directory, and MySQL database
#
# Author:: Adam Greene
# Copyright:: (c) 2006 6 Bar 8, LLC., Sweetspot.dm
# License:: GNU
#
# Feedback appreciated: adam at [nospam] 6bar8 dt com
#
# = Synopsis
#
# from the CommandLine within your RubyOnRails application folder
# $ rake -T
# rake s3:backup                 # Backup code, database, and scm to S3
# rake s3:backup:code            # Backup the code to S3
# rake s3:backup:db              # Backup the database to S3
# rake s3:backup:scm             # Backup the scm repository to S3
# rake s3:manage:clean_up        # Remove all but the 10 most recent backup archives, or optionally specify KEEP=5
#                                # to keep the last 5
# rake s3:manage:delete_bucket   # delete a bucket. You need to pass in NAME=bucket_to_delete. Set FORCE=true if you
#                                # want to delete the bucket even if there are items in it
# rake s3:manage:list            # list all your backup archives
# rake s3:manage:list_buckets    # list all your S3 buckets
# rake s3:retrieve               # retrieve the latest revision of code, database, and scm from S3. If you need a
#                                # specific version, call the individual retrieve tasks
# rake s3:retrieve:code          # retrieve the latest code backup from S3, or optionally specify a VERSION=this_archive.tar.gz
# rake s3:retrieve:db            # retrieve the latest db backup from S3, or optionally specify a VERSION=this_archive.tar.gz
# rake s3:retrieve:scm           # retrieve the latest scm backup from S3, or optionally specify a VERSION=this_archive.tar.gz
#
# = Description
#
# There are a few prerequisites to get this up and running:
# * please download the Amazon S3 ruby library and place it in your ./lib/ directory
# http://developer.amazonwebservices.com/connect/entry.jspa?externalID=135&categoryID=47
# * You will need a 's3.yml' file in ./config/. Sure, you could hard-code the information in this rake task,
#   but I like the idea of keeping all your configuration information in one place. The file will need to look like:
#
#     aws_access_key: ''
#     aws_secret_access_key: ''
#     options:
#       use_ssl: true # set to true or false
#
# Once these two requirements are met, you can easily integrate these rake tasks into capistrano tasks or into cron.
# * For cron, put this into a file like .backup.cron. You can drop this file into /etc/cron.daily,
# and make sure you chmod +x .backup.cron. Also make sure it is owned by the appropriate user (probably 'root'.):
#
#     #!/bin/sh
#
#     # change the paths as you need...
#     cd /var/www/apps//current/ && rake s3:backup >/dev/null 2>&1
#     cd /var/www/apps/staging./current/ && rake s3:backup >/dev/null 2>&1
#
# * within your capistrano recipe file, you can add tasks like these:
#
#     task :before_migrate, :roles => [:app, :db, :web] do
#       # this will back up your svn repository, your code directory, and your mysql db.
#       run "cd #{current_path} && rake --trace RAILS_ENV=production s3:backup"
#     end
#
# = Future enhancements
#
# * encrypt the files before they are sent to S3
# * when doing a retrieve, uncompress and untar the files for the user.
# * any other enhancements?
#
# = Credits and License
#
# inspired by rshll, developed by Dominic Da Silva:
# http://rubyforge.org/projects/rsh3ll/
#
# This library is licensed under the GNU General Public License (GPL)
# [http://dev.perl.org/licenses/gpl1.html].
#
#
require 's3'
require 'yaml'
require 'erb'
require 'active_record'

namespace :s3 do
  desc "Backup code, database, and scm to S3"
  task :backup => ["s3:backup:code", "s3:backup:db", "s3:backup:scm"]

  namespace :backup do
    desc "Backup the code to S3"
    task :code do
      msg "backing up CODE to S3"
      make_bucket('code')
      archive = "/tmp/#{archive_name('code')}" # copy it to /tmp just to play it safe...
      cmd = "cp -rp #{Dir.pwd} #{archive}"
      msg "extracting code directory"
      puts cmd
      result = system(cmd)
      raise("copy of code dir failed. msg: #{$?}") unless result
      send_to_s3('code', archive)
    end # end code task

    desc "Backup the database to S3"
    task :db do
      msg "backing up the DATABASE to S3"
      make_bucket('db')
      archive = "/tmp/#{archive_name('db')}"
      msg "retrieving db info"
      database, user, password = retrieve_db_info
      msg "dumping db"
      cmd = "mysqldump --opt --skip-add-locks -u#{user} "
      puts cmd + "... [password filtered]"
      cmd += " -p'#{password}' " unless password.nil?
      cmd += " #{database} > #{archive}"
      result = system(cmd)
      raise("mysqldump failed. msg: #{$?}") unless result
      send_to_s3('db', archive)
    end

    desc "Backup the scm repository to S3"
    task :scm do
      msg "backing up the SCM repository to S3"
      make_bucket('scm')
      archive = "/tmp/#{archive_name('scm')}"
      svn_info = {}
      IO.popen("svn info") do |f|
        f.each do |line|
          line.strip!
          next if line.empty?
          split = line.split(':')
          svn_info[split.shift.strip] = split.join(':').strip
        end
      end
      url_type, repo_path = svn_info['URL'].split('://')
      repo_path.gsub!(/\/+/, '/').strip!
      url_type.strip!
      use_svnadmin = true
      final_path = svn_info['URL']
      if url_type =~ /^file/
        puts "'#{svn_info['URL']}' is local!"
        final_path = find_scm_dir(repo_path)
      else
        puts "'#{svn_info['URL']}' is not local!\nWe will see if we can find a local path."
        repo_path = repo_path[repo_path.index('/')...repo_path.size]
        repo_path = find_scm_dir(repo_path)
        if File.exists?(repo_path)
          uuid = File.read("#{repo_path}/db/uuid").strip!
          if uuid == svn_info['Repository UUID']
            puts "We have found the same SVN repo at: #{repo_path} with a matching UUID of '#{uuid}'"
            final_path = find_scm_dir(repo_path)
          else
            puts "We have not found the SVN repo at: #{repo_path}. The uuids are different."
            use_svnadmin = false
            final_path = svn_info['URL']
          end
        else
          puts "No SVN repository at #{repo_path}."
          use_svnadmin = false
          final_path = svn_info['URL']
        end
      end
      # ok, now we need to do the work...
      cmd = use_svnadmin ? "svnadmin dump -q #{final_path} > #{archive}" : "svn co -q --ignore-externals --non-interactive #{final_path} #{archive}"
      msg "extracting svn repository"
      puts cmd
      result = system(cmd)
      raise "previous command failed. msg: #{$?}" unless result
      send_to_s3('scm', archive)
    end # end scm task
  end # end backup namespace

  desc "retrieve the latest revision of code, database, and scm from S3. If you need a specific version, call the individual retrieve tasks"
  task :retrieve => ["s3:retrieve:code", "s3:retrieve:db", "s3:retrieve:scm"]

  namespace :retrieve do
    desc "retrieve the latest code backup from S3, or optionally specify a VERSION=this_archive.tar.gz"
    task :code do
      retrieve_file 'code', ENV['VERSION']
    end

    desc "retrieve the latest db backup from S3, or optionally specify a VERSION=this_archive.tar.gz"
    task :db do
      retrieve_file 'db', ENV['VERSION']
    end

    desc "retrieve the latest scm backup from S3, or optionally specify a VERSION=this_archive.tar.gz"
    task :scm do
      retrieve_file 'scm', ENV['VERSION']
    end
  end # end retrieve namespace

  namespace :manage do
    desc "Remove all but the 10 most recent backup archives, or optionally specify KEEP=5 to keep the last 5"
    task :clean_up do
      keep_num = ENV['KEEP'] ? ENV['KEEP'].to_i : 10
      puts "keeping the last #{keep_num}"
      cleanup_bucket('code', keep_num)
      cleanup_bucket('db', keep_num)
      cleanup_bucket('scm', keep_num)
    end

    desc "list all your backup archives"
    task :list do
      print_bucket 'code'
      print_bucket 'db'
      print_bucket 'scm'
    end

    desc "list all your S3 buckets"
    task :list_buckets do
      puts conn.list_all_my_buckets.entries.map { |bucket| bucket.name }
    end

    desc "delete a bucket. You need to pass in NAME=bucket_to_delete. Set FORCE=true if you want to delete the bucket even if there are items in it."
    task :delete_bucket do
      name = ENV['NAME']
      raise "Specify a NAME=bucket that you want deleted" if name.blank?
      force = ENV['FORCE'] == 'true'
      cleanup_bucket(name, 0, false) if force
      response = conn.delete_bucket(name).http_response.message
      response = "Yes" if response == 'No Content'
      puts "deleting bucket #{bucket_name(name)}. Successful? #{response}"
    end
  end # end manage namespace
end

private

def find_scm_dir(path)
  # double-check whether the path is a real physical path vs an svn path
  final_path = path
  tmp_path = final_path
  len = tmp_path.split('/').size
  while !File.exists?(tmp_path) && len > 0 do
    len -= 1
    tmp_path = final_path.split('/')[0..len].join('/')
  end
  final_path = tmp_path if len > 1
  final_path
end

# will save the file from S3 in the pwd.
def retrieve_file(name, specific_file)
  msg "retrieving a #{name} backup from S3"
  entries = conn.list_bucket(bucket_name(name)).entries
  raise "No #{name} backups to retrieve" if entries.size < 1
  entry = entries.find { |entry| entry.key == specific_file }
  raise "Could not find the file '#{specific_file}' in the #{name} bucket" if entry.nil? && !specific_file.nil?
  entry_key = specific_file.nil? ? entries.last.key : entry.key
  msg "retrieving archive: #{entry_key}"
  data = conn.get(bucket_name(name), entry_key).object.data
  File.open(entry_key, "wb") { |f| f.write(data) }
  msg "retrieved file './#{entry_key}'"
end

# print information about the items in a particular bucket
def print_bucket(name)
  msg "#{bucket_name(name)} Bucket"
  conn.list_bucket(bucket_name(name)).entries.map do |entry|
    puts "size: #{entry.size / 1.megabyte}MB, Name: #{entry.key}, Last Modified: #{Time.parse(entry.last_modified).to_s(:short)} UTC"
  end
end

# go through and keep a certain number of items within a particular bucket,
# and remove everything else.
def cleanup_bucket(name, keep_num, convert_name = true)
  msg "cleaning up the #{name} bucket"
  bucket = convert_name ? bucket_name(name) : name
  entries = conn.list_bucket(bucket).entries # will only retrieve the last 1000
  remove = entries.size - keep_num - 1
  entries[0..remove].each do |entry|
    response = conn.delete(bucket, entry.key).http_response.message
    response = "Yes" if response == 'No Content'
    puts "deleting #{bucket}/#{entry.key}, #{Time.parse(entry.last_modified).to_s(:short)} UTC. Successful? #{response}"
  end unless remove < 0
end

# open an S3 connection
def conn
  @s3_configs ||= YAML::load(ERB.new(IO.read("#{RAILS_ROOT}/config/s3.yml")).result)
  @conn ||= S3::AWSAuthConnection.new(@s3_configs['aws_access_key'], @s3_configs['aws_secret_access_key'], @s3_configs['options']['use_ssl'])
end

# programmatically figure out what to call the backup bucket and
# the archive files. Is there another way to do this?
def project_name
  # using Dir.pwd will return something like:
  #   /var/www/apps/staging.sweetspot.dm/releases/20061006155448
  # instead of
  #   /var/www/apps/staging.sweetspot.dm/current
  pwd = ENV['PWD'] || Dir.pwd
  # another hack..ugh. If using a standard capistrano setup, pwd will be the 'current' symlink.
  pwd = File.dirname(pwd) if File.symlink?(pwd)
  File.basename(pwd)
end

# create an S3 bucket. If it already exists, not a problem!
def make_bucket(name)
  msg = conn.create_bucket(bucket_name(name)).http_response.message
  raise "Could not make bucket #{bucket_name(name)}. Msg: #{msg}" if msg != 'OK'
  msg "using bucket: #{bucket_name(name)}"
end

def bucket_name(name)
  # it would be 'nicer' if we could use '/' instead of '_' for bucket names...but for some reason S3 doesn't like that
  "#{token(name)}_backup"
end

def token(name)
  "#{project_name}_#{name}"
end

def archive_name(name)
  @timestamp ||= Time.now.utc.strftime("%Y%m%d%H%M%S")
  token(name).sub('_', '.') + ".#{RAILS_ENV}.#{@timestamp}"
end

# tar and gzip everything that goes to S3,
# send it to the appropriate backup bucket,
# then clean up.
def send_to_s3(name, tmp_file)
  archive = "/tmp/#{archive_name(name)}.tar.gz"
  msg "archiving #{name}"
  cmd = "tar -cpzf #{archive} #{tmp_file}"
  puts cmd
  system cmd
  msg "sending archived #{name} to S3"
  bytes = nil
  File.open(archive, "rb") { |f| bytes = f.read }
  # put the file with a default 'private' ACL
  headers = { 'x-amz-acl' => 'private', 'Content-Length' => FileTest.size(archive).to_s }
  response = conn.put(bucket_name(name), archive.split('/').last, bytes, headers).http_response.message
  msg "finished sending #{name} to S3"
  msg "cleaning up"
  cmd = "rm -rf #{archive} #{tmp_file}"
  puts cmd
  system cmd
end

def msg(text)
  puts " -- msg: #{text}"
end

def retrieve_db_info
  # read the database config file....
  # there must be a better way to do this...
  result = File.read "#{RAILS_ROOT}/config/database.yml"
  result.strip!
  config_file = YAML::load(ERB.new(result).result)
  return [
    config_file[RAILS_ENV]['database'],
    config_file[RAILS_ENV]['username'],
    config_file[RAILS_ENV]['password']
  ]
end
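
The `conn` helper above runs `config/s3.yml` through ERB before handing it to the YAML parser, so the config file can embed Ruby (for example, pulling keys from ENV). A standalone sketch of that load pattern (the inline config and key names here are illustrative, not the real credentials file):

```ruby
require 'erb'
require 'yaml'

# A stand-in for s3.yml; the <%= %> tag is evaluated by ERB first.
raw = <<~YML
  aws_access_key: '<%= "AKIA" + "EXAMPLE" %>'
  options:
    use_ssl: true
YML

config = YAML.load(ERB.new(raw).result)
puts config['aws_access_key']     # ERB ran before YAML parsed
puts config['options']['use_ssl']
```

The same two-step load also appears in `retrieve_db_info` for `database.yml`; Rails itself treats these config files as ERB templates, which is why the pattern works unchanged there.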
