http://lwandersonmusings.blogspot.com/2011/06/workspace-cloning-sharing-in-hudson.html

 

What's inside...
Huge workspace (6GB) + long build time (6 hours) + many post-build steps (17) = huge consumption of resources. Replicate the build? Share the workspace? Clone the workspace? Clone and share the workspace?

Background
Recently I was challenged with the opportunity to convert a large project (1,600,000+ LOC) from a commercial build system to Hudson as part of a client's enterprise-wide effort to standardize their build environment. Without getting into the details of the project itself, one of the greatest challenges was overcoming the size of the source code repository as well as the length of the integration build, 6GB and 6 hours, respectively. The entire build process, including post-build scanning and reporting (Findbugs, Fortify, etc.), took over 12 hours.

 
One of the primary objectives of this particular conversion was to reduce the overall throughput time of the build process. An additional requirement was the publication of Sonar analysis reports for management consumption. And since each sub-project is autonomous in the reporting scheme, we needed to run the Sonar analysis on each of the 16 sub-projects as well.

Let's get started...

I started by adding 17 steps to the build job, one for each sub-project as well as one for the entire project as a whole. But we were experiencing intermittent errors in the Sonar analysis steps, which aborted the entire process. And aborting the entire process meant no publication of build statistics, etc., to the Hudson dashboard, even though the build itself was completing successfully.

Share the workspace...
So we decided to create individual Hudson jobs for each of the Sonar analysis steps so that if one analysis step (job) failed, the others would still run. And more importantly, the build job itself would complete, publishing its code coverage and test results reports to the Hudson dashboards. But since we didn't want to propagate a 6GB workspace for each of the 17 Sonar analysis jobs, not to mention the time required to repeatedly download the 1.6M LOC, we decided to use the 'Custom workspace' Advanced Project configuration option to point back to the main build job's Hudson workspace. It worked.

All well and good… so we thought.

And now the nightly build process was taking 14 hours. Granted, Sonar analysis had been added to the process, but one of the goals was to reduce the overall throughput time, not increase it. And because the 17 additional Sonar analysis jobs re-use the main build's Hudson workspace, all of those jobs had to be "pinned" to the same slave as the build. Even though I was testing this prototype process on a single slave, we wanted the capability to run the process on any slave of a Hudson master that was equipped to handle the build. We also wanted the capability to distribute the workload of the 17 Sonar analysis jobs across the slaves, effectively reducing the overall build process throughput time.

Zip and copy...

So my next challenge was to develop an architecture that would solve the two problems introduced by the new implementation; namely, increased overall throughput time and all jobs pinned to a single slave. First I experimented with zipping up the workspace in a post-build step of the main build and archiving it; the downstream Sonar analysis job then used the 'Copy artifacts from another project' build step to bring the archive into its workspace, followed by a build step to unzip it.
 
The main build job would sometimes fail zipping up the huge workspace. But more often the downstream job would fail either retrieving the archive across the network or unzipping it, because the archive was sometimes corrupted (probably from the download failing). It was very error prone and inconsistent, presumably because of the huge workspace. I experimented with the Copy Archiver, Copy Artifact, Copy Data to Workspace, and Copy to Slave plug-ins, but I finally abandoned the zip-and-copy idea, never having gotten the entire process to run successfully from start to finish.
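
As a minimal sketch of what the abandoned zip-and-copy approach amounted to (expressed in Python rather than Hudson build steps, with every path and job name made up for illustration):

    import shutil

    # upstream (main build) post-build step: zip up the whole workspace as an artifact
    shutil.make_archive(r"X:\Hudson\archive\AcmeMainBuild\workspace", "zip",
                        r"X:\Hudson\workspace\AcmeMainBuild")

    # downstream (Sonar analysis) build steps: pull the artifact across the network
    # and unpack it into this job's own workspace -- the part that kept failing
    shutil.copy(r"\\build-master\archive\AcmeMainBuild\workspace.zip",
                r"X:\Hudson\workspace\Sonar-Sub01\workspace.zip")
    shutil.unpack_archive(r"X:\Hudson\workspace\Sonar-Sub01\workspace.zip",
                          r"X:\Hudson\workspace\Sonar-Sub01")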

Ok, how about cloning?

I then discovered the Clone Workspace SCM plug-in that, in theory, would allow us to run the main integration build on any capable slave, clone its workspace, and then run the downstream jobs on any other capable slave using the ‘cloned’ workspace. Essentially, this was the same as my zip-archive-unzip process, but all supported through a Hudson plug-in, not via a hacked-up process.
 

After installing the plug-in, I reconfigured the main integration build job to utilize the ‘Archive for Clone Workspace SCM’ Post-Build action specifying ‘Most Recent Not Failed Build’:

 
This ensures the workspace is not archived if the build fails, preventing wasted space, since the downstream jobs are not triggered on a failed build. When the archiver finishes archiving the workspace successfully, it automatically deletes the previous workspace archive.
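
A rough sketch of the archiving behavior described above, assuming (this is not the plug-in's actual code) that it archives only on a non-failed build and swaps in the new archive before discarding the old one:

    import os
    import shutil

    def archive_workspace(workspace_dir, archive_dir, build_failed):
        # skip archiving entirely on a failed build; the downstream jobs
        # are not triggered in that case anyway
        if build_failed:
            return None
        os.makedirs(archive_dir, exist_ok=True)
        new_base = os.path.join(archive_dir, "workspace-new")
        final = os.path.join(archive_dir, "workspace.zip")
        shutil.make_archive(new_base, "zip", workspace_dir)  # writes workspace-new.zip
        os.replace(new_base + ".zip", final)                 # replaces the previous archive
        return final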
 
I then reconfigured each of the downstream analysis jobs to specify the ‘Clone Workspace’ Source Code Management configuration option also specifying the ‘Most Recent Not Failed Build’ criteria to match the parent job specification:
 
But… the proverbial good news, bad news. 
The good news is that it recreates the workspace of the upstream job in the pre-build process as the Hudson workspace of the downstream job prior to running any build steps. The workspace of the downstream job is identical to the workspace of the upstream main build job. The bad news is that it recreates the workspace of the upstream job in the pre-build process as the Hudson workspace of the downstream job prior to running any build steps. (Yes, that's not a typo; they are the same.)
 
Even though we consider it temporary (it is replaced on each execution of the downstream job), the workspace cannot be removed automatically when the job finishes; it remains on the slave, tying up valuable disk storage. We attempted to use the 'CleanUp all other workspaces on the same slavegroup' Hudson Build Environment option, but it did not do what we expected.
 
So now we had a new challenge. Even though we could now run the 17 downstream jobs independently of the main build job, i.e., on any capable slave, we had added another problem to the list: lack of disk storage. In propagating the 6GB workspace 17 times, we had quickly consumed all available disk space on the slave. And in the future, when we intend to utilize more than one slave, we could potentially have eighteen 6GB workspaces (over 100GB) on each slave.
Ok, how about cloning AND sharing?

 
After a sudden stroke of brilliance (he said sarcastically), I finally came up with what would be our final solution. While implementing, revising, and testing it, I kept asking myself, "Why didn't I think of this before?" It was a relatively simple solution, combining the two approaches already described: the 'Custom workspace' Advanced Project option and the Clone Workspace SCM plug-in. The challenge was fine-tuning it so it would work.
 
To test my new architecture I was given three slaves, prepared identically, each capable of running the main integration build. I created three Hudson jobs, one for each of the three slaves, to create a custom workspace from the cloned workspace of the main build job. The important difference between this configuration and that of the previous Sonar analysis jobs (which also used the cloned workspace from the main integration build) is that the workspace of each of these three jobs, which we call the 'unarchive' jobs for clarity, is a custom workspace named identically in each of the jobs. This is key. It is not a Hudson-created workspace (which contains the name of the job) as with the previous Sonar analysis jobs.
 
Each of the jobs, pinned to its own slave, creates X:\Hudson\workspace\AcmeMainProject as a custom workspace. The custom workspace is specified in the Advanced Project Options section:
 
After the three unarchive jobs have finished executing, each of the slaves has an identically named workspace with identical content.
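
Conceptually, each unarchive job boils down to something like the sketch below. The real work is done by the Clone Workspace SCM checkout into the job's custom workspace; the archive location and paths here are hypothetical:

    import os
    import shutil

    # where the upstream 'Archive for Clone Workspace SCM' action left the archive
    # (the real location is managed by the plug-in; this UNC path is made up)
    CLONE_ARCHIVE = r"\\build-master\jobs\AcmeMainBuild\lastSuccessful\workspace.zip"

    # the shared custom workspace -- the same path on every slave
    CUSTOM_WORKSPACE = r"X:\Hudson\workspace\AcmeMainProject"

    shutil.rmtree(CUSTOM_WORKSPACE, ignore_errors=True)   # discard the previous copy
    os.makedirs(CUSTOM_WORKSPACE)
    shutil.unpack_archive(CLONE_ARCHIVE, CUSTOM_WORKSPACE)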
 
I then reconfigured each of the downstream Sonar analysis jobs to use the workspace created by the unarchive jobs, X:\Hudson\workspace\AcmeMainProject, as a custom workspace and replaced the 'Clone workspace' SCM option with 'None'. The result is that any of the analysis jobs can run on any of the three slaves since each of the three slaves has an identical workspace at the X:\Hudson\workspace\AcmeMainProject location.

Some final tuning...

After some trial and error, and understanding that Hudson orders all jobs to be triggered alphabetically before triggering them, regardless of how they are grouped in the triggering job, I finally developed an architecture of cascading jobs that gives us the capability we need with a relatively minimal amount of disk storage while also reducing the overall throughput time.
 
The main build job runs the 6-hour Ant build to produce the artifacts of the project. Upon successful completion, a single downstream job is triggered. That single downstream job does nothing more than trigger the three unarchive jobs as well as a fourth 'trigger' job. Using the combination of job names and job priorities, the three unarchive jobs run first, one on each capable slave, creating the custom workspace on each slave. The trigger job runs next, because of its lower priority, on any of the slaves, and also does nothing more than trigger the 17 Sonar analysis jobs, the Findbugs job, and the Fortify analysis job. The downstream jobs are then released and run in alphabetical sequence within priority on any of the three slaves.
 
Using the Priority Sorter plug-in, I assigned declining priorities to the jobs, reflecting the sequence in which they must run: 100 to the unarchive jobs, 75 to the trigger job, 50 to the Sonar analysis jobs, and 25 to the Findbugs and Fortify analysis jobs.
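
To see how the priorities and the alphabetical ordering combine, here is a small sketch (with made-up job names) of the order in which the queued jobs are released:

    # the four priority tiers described above; job names are illustrative
    priorities = {
        "Acme-Unarchive-Slave1": 100,
        "Acme-Unarchive-Slave2": 100,
        "Acme-Unarchive-Slave3": 100,
        "Acme-Trigger-Analysis": 75,
        "Acme-Sonar-Sub01": 50,
        "Acme-Sonar-Sub02": 50,
        "Acme-Findbugs": 25,
        "Acme-Fortify": 25,
    }

    # higher priority first, then alphabetical within the same priority
    run_order = sorted(priorities, key=lambda job: (-priorities[job], job))
    print(run_order)  # unarchive jobs, then the trigger job, then the analysis jobs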
 
 
This guarantees that if any or all of the slaves are unavailable (either offline or busy running another job) when the unarchive jobs are triggered, the unarchive jobs will still execute before any of the downstream analysis jobs once the slaves become available again.
 
So what was the bottom line, you ask?
Stem to stern including archiving and unarchiving the cloned workspace, the total throughput time of the entire build process (when all three slaves are available) was reduced from over 14 hours to just over 8 hours. Not only did we reduce the overall throughput time, we also reduced the total disk storage required by sharing the cloned workspaces.
 
Mission accomplished.
