The command-line parameters of the tesseract OCR tool can be listed with tesseract -h:

Parameter descriptions:

  • --oem: selects the OCR engine. 0 = legacy engine only; 1 = LSTM engine only; 2 = both combined; 3 = let Tesseract decide (default).
  • --psm: selects the page segmentation mode. The default is 3, i.e. fully automatic page segmentation, but without detection of orientation or script (a script is not the same thing as a language: Russian and Ukrainian share one script, and the Chinese and Japanese scripts also overlap). To recognize a single line of text, specify 7. Here we already know the text is Chinese and the direction is horizontal (left to right, then top to bottom; classical Chinese ran top to bottom, right to left), so the default 3 is fine.
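A minimal sketch of the invocation for this article's scenario follows; the input name sample.png and the output base name "result" are hypothetical, and the chi_sim traineddata is assumed to be installed.

```shell
# --oem 1 selects the LSTM engine; --psm 3 is fully automatic page
# segmentation (the default). sample.png and "result" are hypothetical.
cmd="tesseract sample.png result -l chi_sim --oem 1 --psm 3"
echo "$cmd"
# Run it only when tesseract and the input image are actually present:
if command -v tesseract >/dev/null 2>&1 && [ -f sample.png ]; then
    $cmd && cat result.txt || echo "run failed (is chi_sim installed?)"
fi
```

With --psm 7 instead, the whole image would be treated as a single text line.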


combine_tessdata


  • -e: extracts individual components from a combined traineddata file. For example: combine_tessdata -e tessdata/eng.traineddata /home/$USER/temp/eng.config /home/$USER/temp/eng.unicharset
  • -o: overwrites the corresponding components in a given traineddata file. For example: combine_tessdata -o tessdata/eng.traineddata /home/$USER/temp/eng.config /home/$USER/temp/eng.unicharambigs
  • -u: unpacks all components to the specified path. For example: combine_tessdata -u tessdata/eng.traineddata /home/$USER/temp/eng.
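The three options can be chained into a simple round trip: unpack a traineddata file, inspect or edit a component, and pack everything back. A sketch, where the tessdata install path is an assumption to adjust for your system:

```shell
# Unpack eng.traineddata into its components, then pack them back.
# TESSDATA is an assumed install location.
# Note the trailing period on the prefix "eng." - combine_tessdata
# requires the full file prefix including the period.
TESSDATA=/usr/share/tesseract-ocr/4.00/tessdata
WORK=$(mktemp -d)
if command -v combine_tessdata >/dev/null 2>&1 && [ -f "$TESSDATA/eng.traineddata" ]; then
    combine_tessdata -u "$TESSDATA/eng.traineddata" "$WORK/eng." &&
        ls "$WORK"/eng.* &&                 # the individual components
        combine_tessdata "$WORK/eng." ||    # repack into eng.traineddata
        echo "round trip failed"
fi
```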



NAME
    combine_tessdata - combine/extract/overwrite/list/compact Tesseract data

SYNOPSIS
    combine_tessdata [OPTION] FILE...

DESCRIPTION
    combine_tessdata(1) is the main program to combine/extract/overwrite/
    list/compact tessdata components in [lang].traineddata files.

    To combine all the individual tessdata components (unicharset, DAWGs,
    classifier templates, ambiguities, language configs) located at, say,
    /home/$USER/temp/eng.* run:

        combine_tessdata /home/$USER/temp/eng.

    The result will be a combined tessdata file /home/$USER/temp/eng.traineddata

    Specify option -e if you would like to extract individual components
    from a combined traineddata file. For example, to extract the language
    config file and the unicharset from tessdata/eng.traineddata run:

        combine_tessdata -e tessdata/eng.traineddata \
            /home/$USER/temp/eng.config /home/$USER/temp/eng.unicharset

    The desired config file and unicharset will be written to
    /home/$USER/temp/eng.config and /home/$USER/temp/eng.unicharset.

    Specify option -o to overwrite individual components of the given
    [lang].traineddata file. For example, to overwrite the language config
    and unichar ambiguities files in tessdata/eng.traineddata use:

        combine_tessdata -o tessdata/eng.traineddata \
            /home/$USER/temp/eng.config /home/$USER/temp/eng.unicharambigs

    As a result, tessdata/eng.traineddata will contain the new language
    config and unichar ambigs, plus all the original DAWGs, classifier
    templates, etc.

    Note: the file names of the files to extract to and to overwrite from
    should have the appropriate file suffixes (extensions) indicating their
    tessdata component type (.unicharset for the unicharset, .unicharambigs
    for unichar ambigs, etc). See the k*FileSuffix variables in
    ccutil/tessdatamanager.h.

    Specify option -u to unpack all the components to the specified path:

        combine_tessdata -u tessdata/eng.traineddata /home/$USER/temp/eng.

    This will create /home/$USER/temp/eng.* files with the individual
    tessdata components from tessdata/eng.traineddata.

OPTIONS
    -c .traineddata FILE...
        Compacts the LSTM component in the .traineddata file to int.

    -d .traineddata FILE...
        Lists the directory of components in the .traineddata file.

    -e .traineddata FILE...
        Extracts the specified components from the .traineddata file.

    -o .traineddata FILE...
        Overwrites the specified components of the .traineddata file with
        those provided on the command line.

    -u .traineddata PATHPREFIX
        Unpacks the .traineddata using the provided prefix.

CAVEATS
    Prefix refers to the full file prefix, including the period (.)
COMPONENTS
    The components in a Tesseract lang.traineddata file as of Tesseract 4.0
    are briefly described below. For more information on many of these
    files, see
    https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract and
    https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00

    lang.config
        (Optional) Language-specific overrides to default config variables.
        For 4.0 traineddata files, lang.config provides control parameters
        which can affect layout analysis, and sub-languages.
        # (Author's note: unclear whether the layout-analysis parameters
        # relate to the text segmentation algorithm.)

    lang.unicharset
        (Required - 3.0x legacy tesseract) The list of symbols that
        Tesseract recognizes, with properties. See unicharset(5).

    lang.unicharambigs
        (Optional - 3.0x legacy tesseract) This file contains information
        on pairs of recognized symbols which are often confused. For
        example, rn and m.
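As an illustration, a minimal sketch of such an entry in the v1 format described in unicharambigs(5): the file starts with a version line, and each following line gives the token count and tokens of the search pattern, the token count and tokens of the replacement, and a type flag (0 = optional substitution, 1 = mandatory). The entries below are hypothetical examples, not taken from any shipped traineddata file.

```
v1
2       r n     1       m       0
2       ' '     1       "       1
```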
        # (Author's note: for Chinese, this should be usable for
        # look-alike characters such as 日 and 曰.)

    lang.inttemp
        (Required - 3.0x legacy tesseract) Character shape templates for
        each unichar. Produced by mftraining(1).

    lang.pffmtable
        (Required - 3.0x legacy tesseract) The number of features expected
        for each unichar. Produced by mftraining(1) from .tr files.

    lang.normproto
        (Required - 3.0x legacy tesseract) Character normalization
        prototypes generated by cntraining(1) from .tr files.

    lang.punc-dawg
        (Optional - 3.0x legacy tesseract) A dawg made from punctuation
        patterns found around words. The "word" part is replaced by a
        single space.

    lang.word-dawg
        (Optional - 3.0x legacy tesseract) A dawg made from dictionary
        words from the language.

    lang.number-dawg
        (Optional - 3.0x legacy tesseract) A dawg made from tokens which
        originally contained digits. Each digit is replaced by a space
        character.
        # (Author's note: presumably the dawg stores number patterns
        # rather than literal numbers, with each digit position reduced
        # to a placeholder so that any digit matches.)

    lang.freq-dawg
        (Optional - 3.0x legacy tesseract) A dawg made from the most
        frequent words which would have gone into word-dawg.

    lang.fixed-length-dawgs
        (Optional - 3.0x legacy tesseract) Several dawgs of different
        fixed lengths - useful for languages like Chinese.

    lang.shapetable
        (Optional - 3.0x legacy tesseract) When present, a shapetable is an
        extra layer between the character classifier and the word
        recognizer that allows the character classifier to return a
        collection of unichar ids and fonts instead of a single unichar-id
        and font.
        # (Author's note: this presumably improves accuracy by letting
        # later stages choose among multiple candidate characters.)

    lang.bigram-dawg
        (Optional - 3.0x legacy tesseract) A dawg of word bigrams where the
        words are separated by a space and each digit is replaced by a ?.
        # (On bigrams, see https://en.wikipedia.org/wiki/N-gram)

    lang.unambig-dawg
        (Optional - 3.0x legacy tesseract)

    lang.params-model
        (Optional - 3.0x legacy tesseract)

    lang.lstm
        (Required - 4.0 LSTM) Neural net trained recognition model
        generated by lstmtraining.

    lang.lstm-punc-dawg
        (Optional - 4.0 LSTM) A dawg made from punctuation patterns found
        around words. The "word" part is replaced by a single space. Uses
        lang.lstm-unicharset.

    lang.lstm-word-dawg
        (Optional - 4.0 LSTM) A dawg made from dictionary words from the
        language. Uses lang.lstm-unicharset.

    lang.lstm-number-dawg
        (Optional - 4.0 LSTM) A dawg made from tokens which originally
        contained digits. Each digit is replaced by a space character. Uses
        lang.lstm-unicharset.
        # (Author's note: as with lang.number-dawg above, presumably the
        # digits are normalized away so that the dawg encodes number
        # patterns rather than specific numbers.)

    lang.lstm-unicharset
        (Required - 4.0 LSTM) The unicode character set that Tesseract
        recognizes, with properties. The same unicharset must be used to
        train the LSTM and to build the lstm-*-dawgs files.

    lang.lstm-recoder
        (Required - 4.0 LSTM) Unicharcompress, aka the recoder, which maps
        the unicharset further to the codes actually used by the neural
        network recognizer. This is created as part of the starter
        traineddata by combine_lang_model.
        # (Author's note: like the other components, lang.lstm-recoder can
        # be extracted from a traineddata file with combine_tessdata.)

    lang.version
        (Optional) Version string for the traineddata file. First appeared
        in version 4.0 of Tesseract. Older traineddata files report
        "Version string: Pre-4.0.0". 4.0 traineddata files may include the
        network spec used for LSTM training as part of the version string.

HISTORY
    combine_tessdata(1) first appeared in version 3.00 of Tesseract.

SEE ALSO
    tesseract(1), wordlist2dawg(1), cntraining(1), mftraining(1),
    unicharset(5), unicharambigs(5)

COPYING
    Copyright (C) 2009, Google Inc. Licensed under the Apache License,
    Version 2.0

AUTHOR
    The Tesseract OCR engine was written by Ray Smith and his research
    groups at Hewlett Packard (1985-1995) and Google (2006-present).
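To see what a given traineddata file actually contains, the -d option prints its component directory. A sketch, with the tessdata path again an assumption to adjust for your install:

```shell
# List the components packed inside eng.traineddata.
# The path below is an assumed install location.
TD=/usr/share/tesseract-ocr/4.00/tessdata/eng.traineddata
if command -v combine_tessdata >/dev/null 2>&1 && [ -f "$TD" ]; then
    combine_tessdata -d "$TD" || echo "listing failed"
fi
```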
