Sequencing depth statistics for BAM files with bamdst
The data I have been working with recently are all from targeted sequencing or whole-exome sequencing. Evaluating coverage depth and target capture efficiency has therefore become an indispensable part of data quality control.
Previously I would run samtools depth to get per-base depth and then compute depth and capture efficiency with a Perl script. Today I stumbled upon bamdst (https://github.com/shiquan/bamdst), which turns out to be very convenient to use. Following the GitHub page, I record its usage here.
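For reference, the old workflow looked roughly like the sketch below: samtools depth for per-base depth, with the Perl post-processing replaced here by awk. File names are placeholders, and the BAM is assumed to be coordinate-sorted and indexed.

samtools depth -a -b target.bed sample.sorted.bam |
  awk '{ sum += $3; n++; if ($3 >= 30) c30++ }
       END { printf "mean depth: %.2f\ncoverage >=30x: %.2f%%\n", sum/n, 100*c30/n }'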
Download and install: after downloading and extracting the package,
cd ./bamdst-master
make
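If you prefer cloning from GitHub rather than downloading the zip archive, the same build works; a minimal sketch, assuming git is installed:

git clone https://github.com/shiquan/bamdst.git ./bamdst-master   # or download and unzip the release archive
cd ./bamdst-master
make   # builds the bamdst binary in this directory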
Once installed, you need a .bed file and a .bam file. Using the MT-RNR1.bed and test.bam files shipped with the software as an example:
./bamdst-master/bamdst -p ./bamdst-master/example/MT-RNR1.bed -o ./test ./bamdst-master/example/test.bam
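Here -p takes the target BED file and -o the output directory (./test above), into which the report files are written. For your own panel data the call looks roughly like the sketch below; file names are placeholders, and the -q mapping-quality cutoff (default 20) is spelled out explicitly:

mkdir -p ./sample1_qc                 # make sure the output directory exists
./bamdst-master/bamdst \
    -p panel_target.bed \             # capture-panel target regions (BED)
    -q 20 \                           # mapping-quality cutoff, default 20
    -o ./sample1_qc \
    sample1.sorted.bam                # coordinate-sorted, indexed BAM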
The output consists of 7 files:
-coverage.report
-cumu.plot
-insert.plot
-chromosome.report
-region.tsv.gz
-depth.tsv.gz
-uncover.bed
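A quick way to peek at each output: the gzipped tables can be read with zcat. The one-line notes below are my reading of each file's content, not official documentation.

less ./test/coverage.report        # overall QC summary (fields explained below)
less ./test/chromosome.report      # per-chromosome statistics
zcat ./test/region.tsv.gz | head   # per-region depth table
zcat ./test/depth.tsv.gz | head    # per-base depth table
head ./test/uncover.bed            # target regions with poor or no coverage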
Among them, coverage.report provides a wealth of information; the fields are explained below:
[Total] Raw Reads (All reads) // All reads in the bam file(s).
[Total] QC Fail reads // Number of reads that failed QC; this flag is set by other software, such as bwa. See the flag field in the BAM format.
[Total] Raw Data(Mb) // Total reads data in the bam file(s).
[Total] Paired Reads // Number of paired reads.
[Total] Mapped Reads // Number of mapped reads.
[Total] Fraction of Mapped Reads // Ratio of mapped reads against raw reads.
[Total] Mapped Data(Mb) // Mapped data in the bam file(s).
[Total] Fraction of Mapped Data(Mb) // Ratio of mapped data against raw data.
[Total] Properly paired // Paired reads with a proper insert size. See the BAM format specification for details.
[Total] Fraction of Properly paired // Ratio of properly paired reads against mapped reads.
[Total] Read and mate paired // Read (read1) and mate read (read2) paired.
[Total] Fraction of Read and mate paired // Ratio of read-and-mate-paired reads against mapped reads.
[Total] Singletons // Read mapped but mate read unmapped, and vice versa.
[Total] Read and mate map to diff chr // Read and its mate mapped to different chromosomes, usually due to mapping errors or structural variants.
[Total] Read1 // First reads in paired-end sequencing.
[Total] Read2 // Mate reads.
[Total] Read1(rmdup) // First reads after removing duplicates.
[Total] Read2(rmdup) // Mate reads after removing duplicates.
[Total] forward strand reads // Number of forward strand reads.
[Total] backward strand reads // Number of backward strand reads.
[Total] PCR duplicate reads // Number of PCR duplicates.
[Total] Fraction of PCR duplicate reads // Ratio of PCR duplicates.
[Total] Map quality cutoff value // Mapping quality cutoff; this value can be set with -q (default 20), because some variant callers such as GATK only consider high-quality reads.
[Total] MapQuality above cutoff reads // Number of reads with mapping quality greater than or equal to the cutoff value.
[Total] Fraction of MapQ reads in all reads // Ratio of reads with mapping quality greater than or equal to the cutoff against raw reads.
[Total] Fraction of MapQ reads in mapped reads // Ratio of reads with mapping quality greater than or equal to the cutoff against mapped reads.
[Target] Target Reads // Number of reads covering the target regions (specified by the bed file).
[Target] Fraction of Target Reads in all reads // Ratio of target reads against raw reads.
[Target] Fraction of Target Reads in mapped reads // Ratio of target reads against mapped reads.
[Target] Target Data(Mb) // Total bases covering the target regions. If a read only partly overlaps a target region, only the overlapping bases are counted.
[Target] Target Data Rmdup(Mb) // Total bases covering the target regions after removing PCR duplicates.
[Target] Fraction of Target Data in all data // Ratio of target bases against raw bases.
[Target] Fraction of Target Data in mapped data // Ratio of target bases against mapped bases.
[Target] Len of region // The length of target regions.
[Target] Average depth // Average depth of target regions. Calculated by "target bases / length of regions".
[Target] Average depth(rmdup) // Average depth of target regions after removing PCR duplicates.
[Target] Coverage (>0x) // Ratio of bases with depth greater than 0x in target regions, which also means the ratio of covered regions in target regions.
[Target] Coverage (>=4x) // Ratio of bases with depth greater than or equal to 4x in target regions.
[Target] Coverage (>=10x) // Ratio of bases with depth greater than or equal to 10x in target regions.
[Target] Coverage (>=30x) // Ratio of bases with depth greater than or equal to 30x in target regions.
[Target] Coverage (>=100x) // Ratio of bases with depth greater than or equal to 100x in target regions.
[Target] Coverage (>=Nx) // This is an additional line for a user-defined cutoff value, see --cutoffdepth
[Target] Target Region Count // Number of target regions. In normal practice, this is the total number of exons.
[Target] Region covered > 0x // The number of these regions with average depth greater than 0x.
[Target] Fraction Region covered > 0x // Ratio of these regions with average depth greater than 0x.
[Target] Fraction Region covered >= 4x // Ratio of these regions with average depth greater than or equal to 4x.
[Target] Fraction Region covered >= 10x // Ratio of these regions with average depth greater than or equal to 10x.
[Target] Fraction Region covered >= 30x // Ratio of these regions with average depth greater than or equal to 30x.
[Target] Fraction Region covered >= 100x // Ratio of these regions with average depth greater than or equal to 100x.
[flank] flank size // The flank size to be counted; 200 bp by default. Capture oligos can also pull down regions adjacent to the target regions.
[flank] Len of region (not include target region) // The length of the flank regions (target regions are not counted).
[flank] Average depth // Average depth of flank regions.
[flank] flank Reads // The total number of reads covering the flank regions. Note: reads covering the edge of a target region are counted in the flank regions as well.
[flank] Fraction of flank Reads in all reads // Ratio of reads covering the flank regions against raw reads.
[flank] Fraction of flank Reads in mapped reads // Ratio of reads covering the flank regions against mapped reads.
[flank] flank Data(Mb) // Total bases in the flank regions.
[flank] Fraction of flank Data in all data // Ratio of total bases in the flank regions against raw data.
[flank] Fraction of flank Data in mapped data // Ratio of total bases in the flank regions against mapped data.
[flank] Coverage (>0x) // Ratio of flank bases with depth greater than 0x.
[flank] Coverage (>=4x) // Ratio of flank bases with depth greater than or equal to 4x.
[flank] Coverage (>=10x) // Ratio of flank bases with depth greater than or equal to 10x.
[flank] Coverage (>=30x) // Ratio of flank bases with depth greater than or equal to 30x.
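Since coverage.report is plain text, the headline metrics can be pulled out with grep, which is handy when checking many samples in a batch; for example:

grep -E "\[Target\] (Average depth|Coverage|Fraction of Target Reads)" ./test/coverage.report

# Summarise mean target depth across samples (assuming one report per sample directory):
for r in */coverage.report; do
    depth=$(grep "\[Target\] Average depth" "$r" | head -1 | awk '{print $NF}')   # last field holds the value
    echo -e "${r%/coverage.report}\t${depth}"
done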