Using elasticsearch-hadoop
elasticsearch-hadoop is a project that deeply integrates Hadoop and Elasticsearch, maintained as an official ES subproject. By implementing input and output between Hadoop and ES, it lets Hadoop jobs read from and write to an ES cluster, taking full advantage of MapReduce's parallelism and bringing real-time search to Hadoop data.
Project site: http://www.elasticsearch.org/overview/hadoop/
Environment:
CDH4, Elasticsearch 0.90.2
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Quick-Start/cdh4qs_topic_3_3.html
https://github.com/medcl/elasticsearch-rtf
Interoperating between Hive and ES:
# Install: add the elasticsearch-hadoop JAR path inside Hive
# Download the hadoop-es jar: https://download.elasticsearch.org/hadoop/hadoop-latest.zip
# The JAR path Hive loads here is a local path
[medcl@node- ~]$ ls
elasticsearch-hadoop-1.3.0.M1.jar
[medcl@node- ~]$ pwd
/home/medcl
[medcl@node- ~]$ hive -hiveconf hive.aux.jars.path=/home/medcl/elasticsearch-hadoop-1.3.0.M1.jar
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/tmp/medcl/hive_job_log_94db3616-e210-4aab-b07b-6fb159e217ec_1758848920.txt
# The Elasticsearch cluster is named "elasticsearch" and runs on the same machine as Hadoop
# In Hive, create a table (user) and map it to an ES index (index/user) through elasticsearch-hadoop, with id and name columns
CREATE EXTERNAL TABLE user (id INT, name STRING, site STRING)
STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler'
TBLPROPERTIES('es.resource' = 'index/user/',
'es.index.auto.create' = 'true');
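The 'es.resource' value names an index/type pair that the connector translates into REST calls against the cluster. A rough sketch of that mapping; the localhost:9200 default and the helper itself are illustrative assumptions, not part of the connector's API:

```python
# Sketch: how an 'es.resource' value like 'index/user' maps onto
# Elasticsearch REST endpoints. Host and port are assumed defaults.
def es_resource_to_urls(resource, host="localhost", port=9200):
    # Strip surrounding slashes: 'index/user/' -> ('index', 'user')
    index, doc_type = resource.strip("/").split("/")
    base = f"http://{host}:{port}/{index}/{doc_type}"
    return {
        "index_doc": base,            # documents are indexed here
        "search": base + "/_search",  # queries go here
    }

urls = es_resource_to_urls("index/user/")
print(urls["search"])  # http://localhost:9200/index/user/_search
```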
Running as the medcl user:
CREATE EXTERNAL TABLE user (id INT, name STRING)
STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler'
TBLPROPERTIES('es.resource' = '/index/user/',
'es.index.auto.create' = 'true');
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
hive> CREATE EXTERNAL TABLE user (id INT, name STRING)
> STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler'
> TBLPROPERTIES('es.resource' = 'medcl/',
> 'es.index.auto.create' = 'false');
FAILED: Error in metadata: MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=medcl, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
# Ugh, let's check the permissions
[medcl@node- ~]$ hadoop fs -lsr /
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxrwxrwt - hdfs supergroup -- : /tmp
drwxr-xr-x - hdfs supergroup -- : /user
drwxr-xr-x - medcl supergroup -- : /user/medcl
drwxr-xr-x - medcl supergroup -- : /user/medcl/input
-rw-r--r-- medcl supergroup -- : /user/medcl/input/file1.txt
drwxr-xr-x - medcl supergroup -- : /user/medcl/lib
-rw-r--r-- medcl supergroup -- : /user/medcl/lib/elasticsearch-hadoop-1.3.0.M1.jar
drwxr-xr-x - hdfs supergroup -- : /var
drwxr-xr-x - hdfs supergroup -- : /var/lib
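The denial follows from plain POSIX-style permission bits: /user is owned by hdfs:supergroup with mode drwxr-xr-x, so medcl has no write bit. A simplified model of the check (real HDFS also has a superuser concept and, in later versions, ACLs):

```python
# Simplified HDFS-style permission check from an ls-style mode string
# like 'drwxr-xr-x': owner/group/other write bits sit at indexes 2, 5, 8.
def can_write(mode, owner, group, user, user_groups):
    if user == owner:
        return mode[2] == "w"
    if group in user_groups:
        return mode[5] == "w"
    return mode[8] == "w"

# medcl is neither hdfs nor in supergroup, so the 'other' bits apply
print(can_write("drwxr-xr-x", "hdfs", "supergroup", "medcl", ["medcl"]))  # False
# /tmp is drwxrwxrwt, so anyone can write there
print(can_write("drwxrwxrwt", "hdfs", "supergroup", "medcl", ["medcl"]))  # True
```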
# So the /user directory is owned by hdfs. OK, switch to the hdfs user, and put the jar somewhere the hdfs user can reach. /tmp will do.
[root@node- medcl]# cp elasticsearch-hadoop-1.3.0.M1.jar /tmp/
[root@node- medcl]# ^C
[root@node- medcl]# sudo -u hdfs hive -hiveconf hive.aux.jars.path=/tmp/elasticsearch-hadoop-1.3.0.M1.jar
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/tmp/hdfs/hive_job_log_bdad4d7a-f929-43d7-a56e-e026fdd7e3b4_1219802521.txt
hive> CREATE EXTERNAL TABLE user (id INT, name STRING)
> STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler'
> TBLPROPERTIES('es.resource' = '/index/user/',
> 'es.index.auto.create' = 'false');
-- ::29.560 GMT Thread[main,,main] java.io.FileNotFoundException: derby.log (Permission denied)
----------------------------------------------------------------
-- ::29.877 GMT:
Booting Derby version The Apache Software Foundation - Apache Derby - 10.4.2.0 - (): instance a816c00e--fc62-4b5c-000000cec758
on database directory /var/lib/hive/metastore/metastore_db in READ ONLY mode Database Class Loader started - derby.database.classpath=''
FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.DDLTask
# ok, remove the lock files
[root@node- ~]# ls /var/lib/hive/metastore/metastore_db
dbex.lck db.lck log seg0 service.properties tmp
[root@node- ~]# rm /var/lib/hive/metastore/metastore_db/dbex.lck
rm: remove regular file `/var/lib/hive/metastore/metastore_db/dbex.lck'? y
[root@node- ~]# rm /var/lib/hive/metastore/metastore_db/db.lck
rm: remove regular file `/var/lib/hive/metastore/metastore_db/db.lck'? y
# Also, I forgot another hive instance was still running. No wonder.
[root@node- tmp]# ps -aux|grep hive
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2./FAQ
root 0.0 0.1 pts/ S+ : : sudo -u hdfs hive -hiveconf hive.aux.jars.path=/tmp/elasticsearch-hadoop-1.3.0.M1.jar
hdfs 1.8 5.7 pts/ Sl+ : : /usr/lib/jvm/java-openjdk/bin/java -Xmx256m -Dhadoop.log.dir=/usr/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/lib/hive/lib/hive-cli-0.10.-cdh4.5.0.jar org.apache.hadoop.hive.cli.CliDriver -hiveconf hive.aux.jars.path=/tmp/elasticsearch-hadoop-1.3.0.M1.jar
# Permission problem
[root@node- tmp]# ll /var/lib/hive/metastore/metastore_db/
total
drwxrwxr-x medcl medcl Dec : log
drwxrwxr-x medcl medcl Dec : seg0
-rw-rw-r-- medcl medcl Dec : service.properties
drwxrwxr-x medcl medcl Dec : tmp
[root@node- tmp]# sudo -u hdfs hive -hiveconf hive.aux.jars.path=/tmp/elasticsearch-hadoop-1.3.0.M1.jar^C
[root@node- tmp]# chmod /var/lib/hive/metastore/metastore_db/ -R
[root@node- tmp]# sudo -u hdfs hive -hiveconf hive.aux.jars.path=/tmp/elasticsearch-hadoop-1.3.0.M1.jar
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/tmp/hdfs/hive_job_log_d5749cb0-fde0-4da2--c85cf4673885_252074310.txt
hive> show tables;
OK
Time taken: 6.934 seconds
hive> CREATE EXTERNAL TABLE user (id INT, name STRING)
> STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler'
> TBLPROPERTIES('es.resource' = '/index/user/',
> 'es.index.auto.create' = 'true');
OK
Time taken: 1.115 seconds
# ok, the table was created
hive> show tables;
OK
user
Time taken: 0.15 seconds
hive> # The permission problem was caused by Hive's default warehouse path. I'm rusty.
[root@node- tmp]# sudo su hdfs
bash-4.1$ hadoop fs -lsr /
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxrwxrwt - hdfs supergroup -- : /tmp
drwxr-xr-x - hdfs supergroup -- : /user
drwxr-xr-x - hdfs supergroup -- : /user/hive
drwxr-xr-x - hdfs supergroup -- : /user/hive/warehouse
drwxr-xr-x - hdfs supergroup -- : /user/hive/warehouse/user
# Good. Now let's load some data into Hive, starting with a few rows
[root@node- tmp]# cat files1.txt
,medcl
,lcdem
,tom
,jack
# Upload it
[root@node- tmp]# sudo su hdfs
bash-4.1$ hadoop fs -put files1.txt /tmp/
bash-4.1$ hadoop fs -ls /tmp/
Found items
-rw-r--r-- hdfs supergroup -- : /tmp/files1.txt
# Load it into Hive
hive -hiveconf hive.aux.jars.path=/tmp/elasticsearch-hadoop-1.3.0.M1.jar
#LOAD DATA LOCAL INPATH '/tmp/files1.txt' OVERWRITE INTO TABLE user_source;
#CREATE EXTERNAL TABLE user_source (id INT, name STRING); # the ES-backed table is not a native Hive table, so LOAD cannot target it directly
bash-4.1$ hive -hiveconf hive.aux.jars.path=/tmp/elasticsearch-hadoop-1.3.0.M1.jar
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/tmp/hdfs/hive_job_log_a9516f87-6e2d-44db-9d38-18eed77d9dec_1583221137.txt
hive> LOAD DATA LOCAL INPATH '/tmp/files1.txt' OVERWRITE INTO TABLE user;
FAILED: SemanticException [Error ]: A non-native table cannot be used as target for LOAD
hive> CREATE EXTERNAL TABLE user_source (id INT, name STRING);
OK
Time taken: 1.104 seconds
hive> LOAD DATA LOCAL INPATH '/tmp/files1.txt' OVERWRITE INTO TABLE user_source;
Copying data from file:/tmp/files1.txt
Copying file: file:/tmp/files1.txt
Loading data to table default.user_source
Table default.user_source stats: [num_partitions: , num_files: , num_rows: , total_size: , raw_data_size: ]
OK
Time taken: 0.911 seconds
hive> show tables;
OK
user
user_source
Time taken: 0.226 seconds
# The error below happens because the es-hadoop jar was not uploaded to HDFS. It turns out the jar must be available both locally and on HDFS, at the same path.
hive> select id,name from user_source;
Total MapReduce jobs =
Launching Job out of
Number of reduce tasks is set to since there's no reduce operator
java.io.FileNotFoundException: File does not exist: /tmp/elasticsearch-hadoop-1.3.0.M1.jar
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:)
at org.apache.hadoop.filecache.DistributedCache.getFileStatus(DistributedCache.java:)
at org.apache.hadoop.filecache.TrackerDistributedCacheManager.determineTimestamps(TrackerDistributedCacheManager.java:)
at org.apache.hadoop.filecache.TrackerDistributedCacheManager.determineTimestampsAndCacheVisibilities(TrackerDistributedCacheManager.java:)
at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:)
at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:)
at org.apache.hadoop.mapred.JobClient.access$(JobClient.java:)
at org.apache.hadoop.mapred.JobClient$.run(JobClient.java:)
at org.apache.hadoop.mapred.JobClient$.run(JobClient.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:)
at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: /tmp/elasticsearch-hadoop-1.3.0.M1.jar)'
FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.MapRedTask
# ok, upload the jar to HDFS and try again
bash-4.1$ hadoop fs -put elasticsearch-hadoop-1.3.0.M1.jar /tmp/
bash-4.1$ hive -hiveconf hive.aux.jars.path=/tmp/elasticsearch-hadoop-1.3.0.M1.jar
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/tmp/hdfs/hive_job_log_28ea1fbc-dc3b-4e62-9f47-1a88eed30069_1310993479.txt
hive> select id,name from user_source;
Total MapReduce jobs =
Launching Job out of
Number of reduce tasks is set to since there's no reduce operator
Starting Job = job_201312162220_0004, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201312162220_0004
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201312162220_0004
Hadoop job information for Stage-: number of mappers: ; number of reducers:
-- ::, Stage- map = %, reduce = %
-- ::, Stage- map = %, reduce = %, Cumulative CPU 0.88 sec
-- ::, Stage- map = %, reduce = %, Cumulative CPU 0.88 sec
-- ::, Stage- map = %, reduce = %, Cumulative CPU 0.88 sec
-- ::, Stage- map = %, reduce = %, Cumulative CPU 0.88 sec
-- ::, Stage- map = %, reduce = %, Cumulative CPU 0.88 sec
MapReduce Total cumulative CPU time: msec
Ended Job = job_201312162220_0004
MapReduce Jobs Launched:
Job : Map: Cumulative CPU: 0.88 sec HDFS Read: HDFS Write: SUCCESS
Total MapReduce CPU Time Spent: msec
OK
NULL NULL
NULL NULL
NULL NULL
NULL NULL
Time taken: 25.999 seconds
# Hold on, why is the data all NULL? I created the table as EXTERNAL without setting a field delimiter. Frustrating.
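A quick model of why the first table returned NULLs: Hive's default field delimiter is Ctrl-A ('\x01'), not a comma, so each comma-separated line parses as a single field and the INT cast fails. A sketch of the idea (Hive's actual deserialization is more involved than this):

```python
# Model of Hive's row parsing: split on the table's field delimiter,
# and turn a failed INT cast into NULL (None here), as Hive does.
def parse_row(line, delimiter):
    fields = line.rstrip("\n").split(delimiter)
    try:
        row_id = int(fields[0])
    except ValueError:
        row_id = None
    name = fields[1] if len(fields) > 1 else None
    return (row_id, name)

print(parse_row("1,medcl", "\x01"))  # default delimiter: (None, None)
print(parse_row("1,medcl", ","))     # FIELDS TERMINATED BY ',': (1, 'medcl')
```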
hive> drop table user_source;
OK
Time taken: 0.649 seconds
hive> CREATE TABLE user_source (id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
OK
Time taken: 0.109 seconds
hive> LOAD DATA LOCAL INPATH '/tmp/files1.txt' INTO TABLE user_source;
Copying data from file:/tmp/files1.txt
Copying file: file:/tmp/files1.txt
Loading data to table default.user_source
Table default.user_source stats: [num_partitions: , num_files: , num_rows: , total_size: , raw_data_size: ]
OK
Time taken: 0.348 seconds
hive> select * from user_source;
OK
medcl
lcdem
tom
jack
Time taken: 0.155 seconds
# The source table is ready; now insert it into the ES-backed table
hive> INSERT OVERWRITE TABLE user
> SELECT s.id, s.name FROM user_source s;
Total MapReduce jobs =
Launching Job out of
Number of reduce tasks is set to since there's no reduce operator
Starting Job = job_201312162220_0005, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201312162220_0005
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201312162220_0005
Hadoop job information for Stage-: number of mappers: ; number of reducers:
-- ::, Stage- map = %, reduce = %
-- ::, Stage- map = %, reduce = %, Cumulative CPU 1.16 sec
-- ::, Stage- map = %, reduce = %, Cumulative CPU 1.16 sec
-- ::, Stage- map = %, reduce = %, Cumulative CPU 1.16 sec
-- ::, Stage- map = %, reduce = %, Cumulative CPU 1.16 sec
-- ::, Stage- map = %, reduce = %, Cumulative CPU 1.16 sec
MapReduce Total cumulative CPU time: seconds msec
Ended Job = job_201312162220_0005
Rows loaded to user
MapReduce Jobs Launched:
Job : Map: Cumulative CPU: 1.16 sec HDFS Read: HDFS Write: SUCCESS
Total MapReduce CPU Time Spent: seconds msec
OK
Time taken: 21.849 seconds
hive> select * from user;
OK
Failed with exception java.io.IOException:java.lang.IllegalStateException: [GET] on [/index/user/&search_type=scan&scroll=10m&size=&preference=_shards:;_only_node:MP7Zl3owTRm8O2V6cWvOSg] failed; server[http://10.0.2.15:9200] returned [{"_index":"index","_type":"user","_id":"&search_type=scan&scroll=10m&size=50&preference=_shards:4;_only_node:MP7Zl3owTRm8O2V6cWvOSg","exists":false}]
Time taken: 0.387 seconds
# So the query that hadoop-elasticsearch generates looks broken. Still, the data did make it into Elasticsearch, and I don't need to query through Hive for now, so I'll just file an issue upstream.
# ES query results:
bash-4.1$ curl localhost:9200/index/user/_search?q=*&pretty=true
[]
bash-4.1$ {"took":,"timed_out":false,"_shards":{"total":,"successful":,"failed":},"hits":{"total":,"max_score":1.0,"hits":[{"_index":"index","_type":"user","_id":"3x4bEcriRvS6AHkX2Sb7UA","_score":1.0, "_source" : {"id":,"name":"lcdem"}},{"_index":"index","_type":"user","_id":"_3rGVWhaTSCixYxRzBUSLQ","_score":1.0, "_source" : {"id":,"name":"jack"}},{"_index":"index","_type":"user","_id":"T-Q_icjgR8ehsH3IV-twWw","_score":1.0, "_source" : {"id":,"name":"medcl"}},{"_index":"index","_type":"user","_id":"Vdz0sryBT5u0e9hfoMY8Tg","_score":1.0, "_source" : {"id":,"name":"tom"}}]}}
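The shape of the Hive-side failure is visible in the error text: the query parameters are glued onto the resource path with '&' but no '?', so Elasticsearch parses everything after '/index/user/' as a document id instead of search parameters. Python's standard URL parser makes the difference concrete (the paths here are taken from the error message; the fixed form is my assumption of what was intended):

```python
# The broken request from the Hive error vs. a well-formed search URL.
from urllib.parse import urlsplit, parse_qs

broken = "/index/user/&search_type=scan&scroll=10m&size=50"
fixed = "/index/user/_search?search_type=scan&scroll=10m&size=50"

# With no '?', the broken form has an empty query string: it is all path,
# which is why ES answered with a document-GET ("exists":false) response.
print(urlsplit(broken).query)  # ''
params = parse_qs(urlsplit(fixed).query)
print(params["scroll"])  # ['10m']
```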
# Next: test bulk-importing a large volume of data, to see whether it really achieves data locality.
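Bulk writes go through Elasticsearch's _bulk REST endpoint, whose payload is newline-delimited JSON: one action line followed by one source line per document. A hypothetical payload builder, just to show the wire format (elasticsearch-hadoop assembles this internally; this helper is not part of its API):

```python
import json

def build_bulk_payload(index, doc_type, docs):
    # _bulk expects an action line then a source line per document,
    # with a trailing newline after the final line.
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

payload = build_bulk_payload("index", "user", [
    {"id": 1, "name": "medcl"},
    {"id": 2, "name": "lcdem"},
])
print(payload)
```

POSTing this string to http://localhost:9200/_bulk (host assumed) would index both documents in one round trip.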
elasticsearch-hadoop download: https://github.com/elastic/elasticsearch-hadoop