Last time (http://www.cnblogs.com/stGeekpower/p/3457746.html) I covered the functionality of the LexicalizedParser class's main function as described in its Javadoc; this time, let's look at what main actually does. The function is roughly 350 lines long and performs three jobs: initializing variables (various flags), parsing the command-line arguments, and then carrying out the requested work step by step according to the option flags.

In order, the option-driven work includes: tokenization (which must happen first), initializing the LexicalizedParser (loaded or trained), setting the encoding, testing, saving (if requested), and finally parsing and printing the results.

As for the parsing itself: individual sentences are parsed via the parse method of a ParserQuery object created by the LexicalizedParser, and files are parsed by the parseFiles method of the ParseFiles class (which ultimately also goes through ParserQuery).
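Before walking through main itself, here is a minimal sketch of that call path used directly from code (a sketch, not the blog's code: the model path is the stock English PCFG model bundled with the parser, and in older CoreNLP releases the ParserQuery interface lives in the lexparser package rather than parser.common):

    import java.util.List;

    import edu.stanford.nlp.ling.HasWord;
    import edu.stanford.nlp.ling.Sentence;
    import edu.stanford.nlp.parser.common.ParserQuery;
    import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
    import edu.stanford.nlp.trees.Tree;

    public class ParserQueryDemo {
      public static void main(String[] args) {
        // load a serialized grammar; this is the stock English PCFG model
        LexicalizedParser lp =
            LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");
        // an already-tokenized sentence
        List<HasWord> sentence = Sentence.toWordList("This", "is", "a", "test", ".");
        ParserQuery pq = lp.parserQuery();   // the same object main() uses internally
        if (pq.parse(sentence)) {            // returns false if no parse is found
          Tree best = pq.getBestParse();     // highest-scoring tree
          best.pennPrint();
        }
      }
    }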

I. Initializing the variables

This part declares the flags and the variables needed to build the parser:

    boolean train = false;                    // train or parse
    boolean saveToSerializedFile = false;     // whether to save the parser to a serialized file
    boolean saveToTextFile = false;           // whether to save the parser to a text file
    String serializedInputFileOrUrl = null;   // serialized input file or URL
    String textInputFileOrUrl = null;         // text input file or URL
    String serializedOutputFileOrUrl = null;  // serialized output file or URL
    String textOutputFileOrUrl = null;        // text output file or URL
    String treebankPath = null;               // treebank path
    Treebank testTreebank = null;
    Treebank tuneTreebank = null;
    String testPath = null;
    FileFilter testFilter = null;
    String tunePath = null;
    FileFilter tuneFilter = null;
    FileFilter trainFilter = null;            // file range filter for training
    String secondaryTreebankPath = null;
    double secondaryTreebankWeight = 1.0;
    FileFilter secondaryTrainFilter = null;

    // variables needed to process the files to be parsed
    TokenizerFactory<? extends HasWord> tokenizerFactory = null; // tokenizer factory
    String tokenizerOptions = null;           // options passed to the tokenizer
    String tokenizerFactoryClass = null;      // class used for tokenization
    String tokenizerMethod = null;            // factory method used for tokenization
    boolean tokenized = false;                // whether or not the input file has already been tokenized
    Function<List<HasWord>, List<HasWord>> escaper = null; // escaper
    String tagDelimiter = null;               // separator between word and POS tag
    String sentenceDelimiter = null;
    String elementDelimiter = null;

II. Parsing the command-line arguments

This part processes the option arguments supplied by the user, storing each value in the variables declared above (see the invocation sketch after the listing):

    int argIndex = 0;
    if (args.length < 1) { // no arguments: print usage and return
      System.err.println("Basic usage (see Javadoc for more): java edu.stanford.nlp.parser.lexparser" +
          ".LexicalizedParser parserFileOrUrl filename*");
      return;
    }

    Options op = new Options(); // object holding the parser options
    List<String> optionArgs = new ArrayList<String>();
    String encoding = null;
    // while loop through option arguments
    while (argIndex < args.length && args[argIndex].charAt(0) == '-') {
      if (args[argIndex].equalsIgnoreCase("-train") || args[argIndex].equalsIgnoreCase("-trainTreebank")) {
        // training requested
        train = true;
        // read the training sub-arguments: treebank path and file range filter
        Pair<String, FileFilter> treebankDescription = ArgUtils.getTreebankDescription(args, argIndex, "-train");
        argIndex = argIndex + ArgUtils.numSubArgs(args, argIndex) + 1;
        treebankPath = treebankDescription.first();
        trainFilter = treebankDescription.second();
      } else if (args[argIndex].equalsIgnoreCase("-train2")) {
        // TODO: we could use the fully expressive -train options if
        // we add some mechanism for returning leftover options from
        // ArgUtils.getTreebankDescription
        // train = true;     // cdm july 2005: should require -train for this
        int numSubArgs = ArgUtils.numSubArgs(args, argIndex);
        argIndex++;
        if (numSubArgs < 2) {
          throw new RuntimeException("Error: -train2 <treebankPath> [<ranges>] <weight>.");
        }
        secondaryTreebankPath = args[argIndex++];
        secondaryTrainFilter = (numSubArgs == 3) ? new NumberRangesFileFilter(args[argIndex++], true) : null;
        secondaryTreebankWeight = Double.parseDouble(args[argIndex++]);
      } else if (args[argIndex].equalsIgnoreCase("-tLPP") && (argIndex + 1 < args.length)) {
        // a TreebankLangParserParams must be specified when using a language other
        // than English or a treebank other than the English Penn Treebank; this
        // option must precede any other language-specific options, since each
        // language has its own parameters
        try {
          op.tlpParams = (TreebankLangParserParams) Class.forName(args[argIndex + 1]).newInstance();
        } catch (ClassNotFoundException e) {
          System.err.println("Class not found: " + args[argIndex + 1]);
          throw new RuntimeException(e);
        } catch (InstantiationException e) {
          System.err.println("Couldn't instantiate: " + args[argIndex + 1] + ": " + e.toString());
          throw new RuntimeException(e);
        } catch (IllegalAccessException e) {
          System.err.println("Illegal access" + e);
          throw new RuntimeException(e);
        }
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-encoding")) {
        // sets encoding for TreebankLangParserParams
        // redone later to override any serialized parser one read in
        encoding = args[argIndex + 1];
        op.tlpParams.setInputEncoding(encoding);
        op.tlpParams.setOutputEncoding(encoding);
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-tokenized")) {
        // input has already been tokenized
        tokenized = true;
        argIndex += 1;
      } else if (args[argIndex].equalsIgnoreCase("-escaper")) {
        try {
          escaper = ReflectionLoading.loadByReflection(args[argIndex + 1]);
        } catch (Exception e) {
          System.err.println("Couldn't instantiate escaper " + args[argIndex + 1] + ": " + e);
        }
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-tokenizerOptions")) {
        // options handed to the TokenizerFactory class doing the tokenization
        tokenizerOptions = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-tokenizerFactory")) {
        // a TokenizerFactory class to do the tokenization
        tokenizerFactoryClass = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-tokenizerMethod")) {
        // factory method used to obtain the tokenizer
        tokenizerMethod = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-sentences")) {
        // a token that marks sentence boundaries, i.e. the basis for sentence splitting
        sentenceDelimiter = args[argIndex + 1];
        if (sentenceDelimiter.equalsIgnoreCase("newline")) {
          sentenceDelimiter = "\n";
        }
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-parseInside")) {
        // the element whose contents should be parsed, e.g. one sentence or several
        elementDelimiter = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-tagSeparator")) {
        // the separator between a word and its POS tag
        tagDelimiter = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-loadFromSerializedFile") ||
                 args[argIndex].equalsIgnoreCase("-model")) {
        // load the parser from a binary serialized file
        // the next argument must be the path to the parser file
        serializedInputFileOrUrl = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-loadFromTextFile")) {
        // load the parser from declarative text file
        // the next argument must be the path to the parser file
        textInputFileOrUrl = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-saveToSerializedFile")) {
        saveToSerializedFile = true;
        if (ArgUtils.numSubArgs(args, argIndex) < 1) {
          System.err.println("Missing path: -saveToSerialized filename");
        } else {
          serializedOutputFileOrUrl = args[argIndex + 1];
        }
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-saveToTextFile")) {
        // save the parser to declarative text file
        saveToTextFile = true;
        textOutputFileOrUrl = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-saveTrainTrees")) {
        // save the training trees to a binary file
        op.trainOptions.trainTreeFile = args[argIndex + 1];
        argIndex += 2;
      } else if (args[argIndex].equalsIgnoreCase("-treebank") ||
                 args[argIndex].equalsIgnoreCase("-testTreebank") ||
                 args[argIndex].equalsIgnoreCase("-test")) {
        // train and test: arguments describing the test treebank
        Pair<String, FileFilter> treebankDescription = ArgUtils.getTreebankDescription(args, argIndex, "-test");
        argIndex = argIndex + ArgUtils.numSubArgs(args, argIndex) + 1;
        testPath = treebankDescription.first();
        testFilter = treebankDescription.second();
      } else if (args[argIndex].equalsIgnoreCase("-tune")) {
        Pair<String, FileFilter> treebankDescription = ArgUtils.getTreebankDescription(args, argIndex, "-tune");
        argIndex = argIndex + ArgUtils.numSubArgs(args, argIndex) + 1;
        tunePath = treebankDescription.first();
        tuneFilter = treebankDescription.second();
      } else {
        int oldIndex = argIndex;
        argIndex = op.setOptionOrWarn(args, argIndex);
        for (int i = oldIndex; i < argIndex; i++) {
          optionArgs.add(args[i]);
        }
      }
    } // end while loop through arguments
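To make the flag handling concrete, here is a hedged illustration of two invocations this loop would accept; all paths, file ranges, and file names are hypothetical placeholders, not values from the post:

    public class LexParserCliDemo {
      public static void main(String[] ignored) throws Exception {
        // train a parser on a treebank range, then serialize it
        String[] trainArgs = {
            "-train", "wsj/treebank", "200-2199",         // -train <path> [<ranges>]
            "-saveToSerializedFile", "myParser.ser.gz"
        };
        // reload the saved model and parse a file, one sentence per line
        String[] parseArgs = {
            "-loadFromSerializedFile", "myParser.ser.gz", // equivalent to -model
            "-sentences", "newline",
            "input.txt"                                   // trailing argument: file to parse
        };
        edu.stanford.nlp.parser.lexparser.LexicalizedParser.main(trainArgs);
        edu.stanford.nlp.parser.lexparser.LexicalizedParser.main(parseArgs);
      }
    }

Note that main() never hands back the parser object; the two calls above are independent runs, which is exactly why this walkthrough of the intermediate steps is useful.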

III. Tokenization

Syntactic parsing presupposes that the sentence has already been correctly tokenized, and this step sets that up; of course, we are free to plug in whatever tokenizer suits our needs:

    // set up tokenizerFactory with options if provided
    if (tokenizerFactoryClass != null || tokenizerOptions != null) {
      try {
        // the factory class and factory method are taken from the command-line
        // arguments; if no class is given, PTBTokenizer is used by default
        if (tokenizerFactoryClass != null) {
          Class<TokenizerFactory<? extends HasWord>> clazz =
              ErasureUtils.uncheckedCast(Class.forName(tokenizerFactoryClass));
          Method factoryMethod;
          if (tokenizerOptions != null) {
            factoryMethod = clazz.getMethod(tokenizerMethod != null ? tokenizerMethod :
                "newWordTokenizerFactory", String.class);
            tokenizerFactory = ErasureUtils.uncheckedCast(factoryMethod.invoke(null, tokenizerOptions));
          } else {
            factoryMethod = clazz.getMethod(tokenizerMethod != null ? tokenizerMethod :
                "newTokenizerFactory");
            tokenizerFactory = ErasureUtils.uncheckedCast(factoryMethod.invoke(null));
          }
        } else {
          // have options but no tokenizer factory; default to PTB
          tokenizerFactory = PTBTokenizer.PTBTokenizerFactory.newWordTokenizerFactory(tokenizerOptions);
        }
      } catch (IllegalAccessException e) {
        System.err.println("Couldn't instantiate TokenizerFactory " + tokenizerFactoryClass +
            " with options " + tokenizerOptions);
        throw new RuntimeException(e);
      } catch (NoSuchMethodException e) {
        System.err.println("Couldn't instantiate TokenizerFactory " + tokenizerFactoryClass +
            " with options " + tokenizerOptions);
        throw new RuntimeException(e);
      } catch (ClassNotFoundException e) {
        System.err.println("Couldn't instantiate TokenizerFactory " + tokenizerFactoryClass +
            " with options " + tokenizerOptions);
        throw new RuntimeException(e);
      } catch (InvocationTargetException e) {
        System.err.println("Couldn't instantiate TokenizerFactory " + tokenizerFactoryClass +
            " with options " + tokenizerOptions);
        throw new RuntimeException(e);
      }
    }
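For the common case, the reflection above reduces to a direct factory call. A minimal sketch of the default branch, assuming the PTB tokenizer that ships with the parser (the option string is just an example):

    import java.io.StringReader;

    import edu.stanford.nlp.ling.Word;
    import edu.stanford.nlp.process.PTBTokenizer;
    import edu.stanford.nlp.process.Tokenizer;
    import edu.stanford.nlp.process.TokenizerFactory;

    public class TokenizerFactoryDemo {
      public static void main(String[] args) {
        // what the default branch builds when only -tokenizerOptions is given
        TokenizerFactory<Word> tf =
            PTBTokenizer.PTBTokenizerFactory.newWordTokenizerFactory("americanize=false");
        Tokenizer<Word> tok = tf.getTokenizer(new StringReader("It's a test sentence."));
        while (tok.hasNext()) {
          System.out.println(tok.next());  // one token per line
        }
      }
    }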

IV. Initializing the LexicalizedParser

There are three ways to initialize the LexicalizedParser: train one from data, read one from a text grammar file, or read one from a serialized file (a programmatic sketch follows the listing):

    if (tuneFilter != null || tunePath != null) {
      // set up the tune treebank
      if (tunePath == null) {
        if (treebankPath == null) {
          throw new RuntimeException("No tune treebank path specified...");
        } else {
          System.err.println("No tune treebank path specified. Using train path: \"" + treebankPath + '\"');
          tunePath = treebankPath;
        }
      }
      tuneTreebank = op.tlpParams.testMemoryTreebank();
      tuneTreebank.loadPath(tunePath, tuneFilter);
    }

    if (!train && op.testOptions.verbose) {
      StringUtils.printErrInvocationString("LexicalizedParser", args);
    }
    edu.stanford.nlp.parser.lexparser.LexicalizedParser lp; // always initialized in next if-then-else block
    if (train) {
      StringUtils.printErrInvocationString("LexicalizedParser", args);

      // so we train a parser using the treebank
      GrammarCompactor compactor = null;
      if (op.trainOptions.compactGrammar() == 3) {
        compactor = new ExactGrammarCompactor(op, false, false);
      }

      Treebank trainTreebank = makeTreebank(treebankPath, op, trainFilter);

      Treebank secondaryTrainTreebank = null;
      if (secondaryTreebankPath != null) {
        secondaryTrainTreebank = makeSecondaryTreebank(secondaryTreebankPath, op, secondaryTrainFilter);
      }

      List<List<TaggedWord>> extraTaggedWords = null;
      if (op.trainOptions.taggedFiles != null) {
        extraTaggedWords = new ArrayList<List<TaggedWord>>();
        List<TaggedFileRecord> fileRecords = TaggedFileRecord.createRecords(new Properties(),
            op.trainOptions.taggedFiles);
        for (TaggedFileRecord record : fileRecords) {
          for (List<TaggedWord> sentence : record.reader()) {
            extraTaggedWords.add(sentence);
          }
        }
      }
      // when training is requested, lp is built here from the annotated data
      lp = getParserFromTreebank(trainTreebank, secondaryTrainTreebank, secondaryTreebankWeight, compactor, op,
          tuneTreebank, extraTaggedWords);
    } else if (textInputFileOrUrl != null) {
      // so we load the parser from a text grammar file
      lp = getParserFromTextFile(textInputFileOrUrl, op);
    } else {
      // so we load a serialized parser
      if (serializedInputFileOrUrl == null && argIndex < args.length) {
        // the next argument must be the path to the serialized parser
        serializedInputFileOrUrl = args[argIndex];
        argIndex++;
      }
      if (serializedInputFileOrUrl == null) {
        System.err.println("No grammar specified, exiting...");
        return;
      }
      String[] extraArgs = new String[optionArgs.size()];
      extraArgs = optionArgs.toArray(extraArgs);
      try {
        lp = loadModel(serializedInputFileOrUrl, op, extraArgs);
        op = lp.op;
      } catch (IllegalArgumentException e) {
        System.err.println("Error loading parser, exiting...");
        throw e;
      }
    }
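The same routes are also exposed as public API, so a parser can be built without going through main at all. A minimal sketch, assuming CoreNLP's trainFromTreebank and loadModel helpers; the treebank path and file range are hypothetical:

    import edu.stanford.nlp.io.NumberRangesFileFilter;
    import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
    import edu.stanford.nlp.parser.lexparser.Options;

    public class BuildParserDemo {
      public static void main(String[] args) {
        // route one: train from a treebank, then save (see section VII below)
        Options op = new Options();
        LexicalizedParser trained = LexicalizedParser.trainFromTreebank(
            "wsj/treebank", new NumberRangesFileFilter("200-2199", true), op);
        trained.saveParserToSerialized("myParser.ser.gz");
        // route three: load a serialized model (a text grammar, route two, is analogous)
        LexicalizedParser loaded = LexicalizedParser.loadModel("myParser.ser.gz");
      }
    }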

V. Setting the encoding

    // the following has to go after reading parser to make sure
    // op and tlpParams are the same for train and test
    // THIS IS BUTT UGLY BUT IT STOPS USER SPECIFIED ENCODING BEING
    // OVERWRITTEN BY ONE SPECIFIED IN SERIALIZED PARSER
    if (encoding != null) {
      op.tlpParams.setInputEncoding(encoding);
      op.tlpParams.setOutputEncoding(encoding);
    }

VI. Setting up the test data

    if (testFilter != null || testPath != null) {
      if (testPath == null) {
        if (treebankPath == null) {
          throw new RuntimeException("No test treebank path specified...");
        } else {
          System.err.println("No test treebank path specified. Using train path: \"" + treebankPath + '\"');
          testPath = treebankPath;
        }
      }
      testTreebank = op.tlpParams.testMemoryTreebank();
      testTreebank.loadPath(testPath, testFilter);
    }

VII. Saving the trained parser if requested

    op.trainOptions.sisterSplitters = Generics.newHashSet(Arrays.asList(op.tlpParams.sisterSplitters()));

    // at this point we should be sure that op.tlpParams is
    // set appropriately (from command line, or from grammar file),
    // and will never change again. -- Roger

    // Now what do we do with the parser we've made
    if (saveToTextFile) {
      // save the parser to textGrammar format
      if (textOutputFileOrUrl != null) {
        lp.saveParserToTextFile(textOutputFileOrUrl);
      } else {
        System.err.println("Usage: must specify a text grammar output path");
      }
    }
    if (saveToSerializedFile) {
      if (serializedOutputFileOrUrl != null) {
        lp.saveParserToSerialized(serializedOutputFileOrUrl);
      } else if (textOutputFileOrUrl == null && testTreebank == null) {
        // no saving/parsing request has been specified
        System.err.println("usage: java edu.stanford.nlp.parser.lexparser.LexicalizedParser " +
            "-train trainFilesPath [fileRange] -saveToSerializedFile serializedParserFilename");
      }
    }

VIII. Printing model information when training or in verbose mode

    if (op.testOptions.verbose || train) {
      // Tell the user a little or a lot about what we have made
      // get lexicon size separately as it may have its own prints in it....
      String lexNumRules = lp.lex != null ? Integer.toString(lp.lex.numRules()) : "";
      System.err.println("Grammar\tStates\tTags\tWords\tUnaryR\tBinaryR\tTaggings");
      System.err.println("Grammar\t" +
          lp.stateIndex.size() + '\t' +
          lp.tagIndex.size() + '\t' +
          lp.wordIndex.size() + '\t' +
          (lp.ug != null ? lp.ug.numRules() : "") + '\t' +
          (lp.bg != null ? lp.bg.numRules() : "") + '\t' +
          lexNumRules);
      System.err.println("ParserPack is " + op.tlpParams.getClass().getName());
      System.err.println("Lexicon is " + lp.lex.getClass().getName());
      if (op.testOptions.verbose) {
        System.err.println("Tags are: " + lp.tagIndex);
        // System.err.println("States are: " + lp.pd.stateIndex); // This is too verbose. It was already
        // printed out by the below printOptions command if the flag -printStates is given (at training time)!
      }
      printOptions(false, op);
    }

IX. Performing the parse

Parsing can be done sentence by sentence, or multiple files can be parsed with the ParseFiles class.

    if (testTreebank != null) {
      // test parser on treebank
      EvaluateTreebank evaluator = new EvaluateTreebank(lp);
      evaluator.testOnTreebank(testTreebank);
    } else if (argIndex >= args.length) {
      // no more arguments, so we just parse our own test sentence
      PrintWriter pwOut = op.tlpParams.pw();
      PrintWriter pwErr = op.tlpParams.pw(System.err);
      ParserQuery pq = lp.parserQuery();
      if (pq.parse(op.tlpParams.defaultTestSentence())) { // parse the default test sentence
        lp.getTreePrint().printTree(pq.getBestParse(), pwOut);
      } else {
        pwErr.println("Error. Can't parse test sentence: " +
            op.tlpParams.defaultTestSentence());
      }
    } else {
      // we parse the files named by the remaining arguments
      ParseFiles.parseFiles(args, argIndex, tokenized, tokenizerFactory, elementDelimiter, sentenceDelimiter,
          escaper, tagDelimiter, op, lp.getTreePrint(), lp);
    }
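To close the loop, here is a short end-to-end sketch combining section III and section IX: tokenize raw text with a tokenizer factory, then hand the tokens to the parser. It mirrors the ParserDemo shipped with the parser; the model path is hypothetical:

    import java.io.StringReader;
    import java.util.List;

    import edu.stanford.nlp.ling.CoreLabel;
    import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
    import edu.stanford.nlp.process.CoreLabelTokenFactory;
    import edu.stanford.nlp.process.PTBTokenizer;
    import edu.stanford.nlp.process.Tokenizer;
    import edu.stanford.nlp.process.TokenizerFactory;
    import edu.stanford.nlp.trees.Tree;

    public class RawTextParseDemo {
      public static void main(String[] args) {
        LexicalizedParser lp = LexicalizedParser.loadModel("myParser.ser.gz");
        // the PTB tokenizer, as the default branch of section III would build it
        TokenizerFactory<CoreLabel> tf =
            PTBTokenizer.factory(new CoreLabelTokenFactory(), "");
        Tokenizer<CoreLabel> tok =
            tf.getTokenizer(new StringReader("The quick brown fox jumped over the lazy dog."));
        List<CoreLabel> words = tok.tokenize();
        Tree parse = lp.apply(words); // convenience wrapper around a ParserQuery
        parse.pennPrint();
      }
    }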
