Kafka: ZK+Kafka+Spark Streaming cluster setup (Part 25) Structured Streaming: a single topic carries multiple parts of one logical record; joining them into one record by key (and the problems encountered).
Requirement:
A batch of data sits on a Kafka topic, spread across 9 partitions. Each record is published with a key from {m1, m2, m3, m4, ..., m9} and a value holding the record items. Within every group mx (m1, m2, ..., m9) the unique key of a record is int_id + start_time, where int_id and start_time are fields inside the topic record. The 9 groups can therefore be joined on this unique key (m1.primarykey1, m2.primarykey1, m3.primarykey1, ..., m9.primarykey1).
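For illustration only, the records might be published roughly as follows. This is a minimal sketch, not the actual producer code: the broker address, topic name and the CSV layout of the value are assumptions.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092");  // assumed broker address
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
// the nine parts of one logical record share the same int_id + start_time
for (int i = 1; i <= 9; i++) {
    String value = "1001,2018-07-30 12:00:00," + (100 + i) + "," + (200 + i); // int_id,start_time,counter_00,counter_01
    producer.send(new ProducerRecord<>("topic_name", "m" + i, value));        // keys m1 ... m9
}
producer.close();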
Pseudocode. Each m record is made up of the following fields:
public class MS_PLRULQX {
    private String key;
    private String int_id;
    private String start_time;
    private long MS_PLRULQX_00;
    private long MS_PLRULQX_01;

    public String getPrimaryKey() {
        return this.int_id + "_" + this.start_time;
    }
}
Full MS_PLRULQX class definition:
import java.io.Serializable;
import org.apache.commons.lang3.math.NumberUtils; // assumed: Apache Commons Lang NumberUtils
import org.apache.spark.sql.Row;

public class MS_PLRULQX implements Serializable, Comparable<MS_PLRULQX> {
    private static final long serialVersionUID = -2873721171908282946L;

    private String key;
    private String int_id;
    private String start_time;
    private long MS_PLRULQX_00;
    private long MS_PLRULQX_01;

    public MS_PLRULQX() {
    }

    public MS_PLRULQX(Row row) {
        this.key = row.getAs("key");
        this.int_id = row.getAs("int_id");
        this.start_time = row.getAs("start_time");
        this.MS_PLRULQX_00 = row.getAs("MS_PLRULQX_00");
        this.MS_PLRULQX_01 = row.getAs("MS_PLRULQX_01");
    }

    public String getKey() {
        return key;
    }
    public void setKey(String key) {
        this.key = key;
    }
    public String getInt_id() {
        return int_id;
    }
    public void setInt_id(String int_id) {
        this.int_id = int_id;
    }
    public String getStart_time() {
        return start_time;
    }
    public void setStart_time(String start_time) {
        this.start_time = start_time;
    }
    public long getMS_PLRULQX_00() {
        return MS_PLRULQX_00;
    }
    public void setMS_PLRULQX_00(long MS_PLRULQX_00) {
        this.MS_PLRULQX_00 = MS_PLRULQX_00;
    }
    public long getMS_PLRULQX_01() {
        return MS_PLRULQX_01;
    }
    public void setMS_PLRULQX_01(long MS_PLRULQX_01) {
        this.MS_PLRULQX_01 = MS_PLRULQX_01;
    }

    public String getPrimaryKey() {
        return this.int_id + "_" + this.start_time;
    }

    @Override
    public int compareTo(MS_PLRULQX other) {
        // key format: MS_PLRULQX1, MS_PLRULQX2, ..., MS_PLRULQX9
        if (this.getKey().toLowerCase().indexOf("MS_PLRULQX".toLowerCase()) != -1) {
            String thisKeyStr = this.getKey().toLowerCase().replace("MS_PLRULQX".toLowerCase(), "");
            String otherKeyStr = other.getKey().toLowerCase().replace("MS_PLRULQX".toLowerCase(), "");
            if (NumberUtils.isNumber(thisKeyStr) && NumberUtils.isNumber(otherKeyStr)) {
                int thisKeyValue = Integer.valueOf(thisKeyStr);
                int otherKeyValue = Integer.valueOf(otherKeyStr);
                if (thisKeyValue > otherKeyValue) {
                    return 1;
                } else if (thisKeyValue == otherKeyValue) {
                    return 0;
                } else {
                    return -1;
                }
            }
        }
        return this.key.compareTo(other.key);
    }
}
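For reference, the ordering produced by compareTo can be checked with a small snippet like the following (a hypothetical example, not from the original post):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

List<MS_PLRULQX> parts = new ArrayList<>();
for (String k : new String[] { "MS_PLRULQX9", "MS_PLRULQX1", "MS_PLRULQX3" }) {
    MS_PLRULQX part = new MS_PLRULQX();
    part.setKey(k);
    parts.add(part);
}
Collections.sort(parts);
// keys are now ordered numerically: MS_PLRULQX1, MS_PLRULQX3, MS_PLRULQX9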
There is one MS_PLRULQX record per key group (m1 ... m9) on the topic. These records are joined together, and only records whose primary keys match may be combined. After the join, the combined entity keeps the following fields:
public class MS_PLRULQX_Combine implements Serializable {
    private String key;
    private String int_id;
    private String start_time;
    private long mr_packetlossrateulqci_1_00;
    private long mr_packetlossrateulqci_1_01;
    private long mr_packetlossrateulqci_2_00;
    private long mr_packetlossrateulqci_2_01;
    private long mr_packetlossrateulqci_3_00;
    private long mr_packetlossrateulqci_3_01;
    private long mr_packetlossrateulqci_4_00;
    private long mr_packetlossrateulqci_4_01;
    private long mr_packetlossrateulqci_5_00;
    private long mr_packetlossrateulqci_5_01;
    private long mr_packetlossrateulqci_6_00;
    private long mr_packetlossrateulqci_6_01;
    private long mr_packetlossrateulqci_7_00;
    private long mr_packetlossrateulqci_7_01;
    private long mr_packetlossrateulqci_8_00;
    private long mr_packetlossrateulqci_8_01;
    private long mr_packetlossrateulqci_9_00;
    private long mr_packetlossrateulqci_9_01;
}
Full MS_PLRULQX_Combine class definition:
import java.io.Serializable;
import java.util.List;

public class MS_PLRULQX_Combine implements Serializable {
    private static final long serialVersionUID = -944128402186054489L;

    private String key;
    private String int_id;
    private String start_time;
    private long mr_packetlossrateulqci_1_00;
    private long mr_packetlossrateulqci_1_01;
    private long mr_packetlossrateulqci_2_00;
    private long mr_packetlossrateulqci_2_01;
    private long mr_packetlossrateulqci_3_00;
    private long mr_packetlossrateulqci_3_01;
    private long mr_packetlossrateulqci_4_00;
    private long mr_packetlossrateulqci_4_01;
    private long mr_packetlossrateulqci_5_00;
    private long mr_packetlossrateulqci_5_01;
    private long mr_packetlossrateulqci_6_00;
    private long mr_packetlossrateulqci_6_01;
    private long mr_packetlossrateulqci_7_00;
    private long mr_packetlossrateulqci_7_01;
    private long mr_packetlossrateulqci_8_00;
    private long mr_packetlossrateulqci_8_01;
    private long mr_packetlossrateulqci_9_00;
    private long mr_packetlossrateulqci_9_01;

    public MS_PLRULQX_Combine() {
    }

    // the list is expected to be sorted by key (MS_PLRULQX1 ... MS_PLRULQX9) before it is passed in
    public MS_PLRULQX_Combine(List<MS_PLRULQX> list) {
        int sizeOfList = list.size();
        if (sizeOfList > 9) {
            throw new RuntimeException("the measurement group's length (" + sizeOfList + ") is greater than 9");
        }
        if (sizeOfList >= 1) {
            setItem1(list.get(0));
        }
        if (sizeOfList >= 2) {
            setItem2(list.get(1));
        }
        if (sizeOfList >= 3) {
            setItem3(list.get(2));
        }
        if (sizeOfList >= 4) {
            setItem4(list.get(3));
        }
        if (sizeOfList >= 5) {
            setItem5(list.get(4));
        }
        if (sizeOfList >= 6) {
            setItem6(list.get(5));
        }
        if (sizeOfList >= 7) {
            setItem7(list.get(6));
        }
        if (sizeOfList >= 8) {
            setItem8(list.get(7));
        }
        if (sizeOfList >= 9) {
            setItem9(list.get(8));
        }
    }

    // item1 also carries the shared columns (key, int_id, start_time);
    // the per-part counters come from MS_PLRULQX_00 / MS_PLRULQX_01 of each group member
    private void setItem1(MS_PLRULQX item1) {
        if (item1 != null) {
            this.key = item1.getKey();
            this.int_id = item1.getInt_id();
            this.start_time = item1.getStart_time();
            this.mr_packetlossrateulqci_1_00 = item1.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_1_01 = item1.getMS_PLRULQX_01();
        }
    }

    private void setItem2(MS_PLRULQX item2) {
        if (item2 != null) {
            this.mr_packetlossrateulqci_2_00 = item2.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_2_01 = item2.getMS_PLRULQX_01();
        }
    }

    private void setItem3(MS_PLRULQX item3) {
        if (item3 != null) {
            this.mr_packetlossrateulqci_3_00 = item3.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_3_01 = item3.getMS_PLRULQX_01();
        }
    }

    private void setItem4(MS_PLRULQX item4) {
        if (item4 != null) {
            this.mr_packetlossrateulqci_4_00 = item4.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_4_01 = item4.getMS_PLRULQX_01();
        }
    }

    private void setItem5(MS_PLRULQX item5) {
        if (item5 != null) {
            this.mr_packetlossrateulqci_5_00 = item5.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_5_01 = item5.getMS_PLRULQX_01();
        }
    }

    private void setItem6(MS_PLRULQX item6) {
        if (item6 != null) {
            this.mr_packetlossrateulqci_6_00 = item6.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_6_01 = item6.getMS_PLRULQX_01();
        }
    }

    private void setItem7(MS_PLRULQX item7) {
        if (item7 != null) {
            this.mr_packetlossrateulqci_7_00 = item7.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_7_01 = item7.getMS_PLRULQX_01();
        }
    }

    private void setItem8(MS_PLRULQX item8) {
        if (item8 != null) {
            this.mr_packetlossrateulqci_8_00 = item8.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_8_01 = item8.getMS_PLRULQX_01();
        }
    }

    private void setItem9(MS_PLRULQX item9) {
        if (item9 != null) {
            this.mr_packetlossrateulqci_9_00 = item9.getMS_PLRULQX_00();
            this.mr_packetlossrateulqci_9_01 = item9.getMS_PLRULQX_01();
        }
    }

    public String getKey() {
        return key;
    }
    public void setKey(String key) {
        this.key = key;
    }
    public String getInt_id() {
        return int_id;
    }
    public void setInt_id(String int_id) {
        this.int_id = int_id;
    }
    public String getStart_time() {
        return start_time;
    }
    public void setStart_time(String start_time) {
        this.start_time = start_time;
    }
    public long getMr_packetlossrateulqci_1_00() {
        return mr_packetlossrateulqci_1_00;
    }
    public void setMr_packetlossrateulqci_1_00(long mr_packetlossrateulqci_1_00) {
        this.mr_packetlossrateulqci_1_00 = mr_packetlossrateulqci_1_00;
    }
    public long getMr_packetlossrateulqci_1_01() {
        return mr_packetlossrateulqci_1_01;
    }
    public void setMr_packetlossrateulqci_1_01(long mr_packetlossrateulqci_1_01) {
        this.mr_packetlossrateulqci_1_01 = mr_packetlossrateulqci_1_01;
    }
    public long getMr_packetlossrateulqci_2_00() {
        return mr_packetlossrateulqci_2_00;
    }
    public void setMr_packetlossrateulqci_2_00(long mr_packetlossrateulqci_2_00) {
        this.mr_packetlossrateulqci_2_00 = mr_packetlossrateulqci_2_00;
    }
    public long getMr_packetlossrateulqci_2_01() {
        return mr_packetlossrateulqci_2_01;
    }
    public void setMr_packetlossrateulqci_2_01(long mr_packetlossrateulqci_2_01) {
        this.mr_packetlossrateulqci_2_01 = mr_packetlossrateulqci_2_01;
    }
    public long getMr_packetlossrateulqci_3_00() {
        return mr_packetlossrateulqci_3_00;
    }
    public void setMr_packetlossrateulqci_3_00(long mr_packetlossrateulqci_3_00) {
        this.mr_packetlossrateulqci_3_00 = mr_packetlossrateulqci_3_00;
    }
    public long getMr_packetlossrateulqci_3_01() {
        return mr_packetlossrateulqci_3_01;
    }
    public void setMr_packetlossrateulqci_3_01(long mr_packetlossrateulqci_3_01) {
        this.mr_packetlossrateulqci_3_01 = mr_packetlossrateulqci_3_01;
    }
    public long getMr_packetlossrateulqci_4_00() {
        return mr_packetlossrateulqci_4_00;
    }
    public void setMr_packetlossrateulqci_4_00(long mr_packetlossrateulqci_4_00) {
        this.mr_packetlossrateulqci_4_00 = mr_packetlossrateulqci_4_00;
    }
    public long getMr_packetlossrateulqci_4_01() {
        return mr_packetlossrateulqci_4_01;
    }
    public void setMr_packetlossrateulqci_4_01(long mr_packetlossrateulqci_4_01) {
        this.mr_packetlossrateulqci_4_01 = mr_packetlossrateulqci_4_01;
    }
    public long getMr_packetlossrateulqci_5_00() {
        return mr_packetlossrateulqci_5_00;
    }
    public void setMr_packetlossrateulqci_5_00(long mr_packetlossrateulqci_5_00) {
        this.mr_packetlossrateulqci_5_00 = mr_packetlossrateulqci_5_00;
    }
    public long getMr_packetlossrateulqci_5_01() {
        return mr_packetlossrateulqci_5_01;
    }
    public void setMr_packetlossrateulqci_5_01(long mr_packetlossrateulqci_5_01) {
        this.mr_packetlossrateulqci_5_01 = mr_packetlossrateulqci_5_01;
    }
    public long getMr_packetlossrateulqci_6_00() {
        return mr_packetlossrateulqci_6_00;
    }
    public void setMr_packetlossrateulqci_6_00(long mr_packetlossrateulqci_6_00) {
        this.mr_packetlossrateulqci_6_00 = mr_packetlossrateulqci_6_00;
    }
    public long getMr_packetlossrateulqci_6_01() {
        return mr_packetlossrateulqci_6_01;
    }
    public void setMr_packetlossrateulqci_6_01(long mr_packetlossrateulqci_6_01) {
        this.mr_packetlossrateulqci_6_01 = mr_packetlossrateulqci_6_01;
    }
    public long getMr_packetlossrateulqci_7_00() {
        return mr_packetlossrateulqci_7_00;
    }
    public void setMr_packetlossrateulqci_7_00(long mr_packetlossrateulqci_7_00) {
        this.mr_packetlossrateulqci_7_00 = mr_packetlossrateulqci_7_00;
    }
    public long getMr_packetlossrateulqci_7_01() {
        return mr_packetlossrateulqci_7_01;
    }
    public void setMr_packetlossrateulqci_7_01(long mr_packetlossrateulqci_7_01) {
        this.mr_packetlossrateulqci_7_01 = mr_packetlossrateulqci_7_01;
    }
    public long getMr_packetlossrateulqci_8_00() {
        return mr_packetlossrateulqci_8_00;
    }
    public void setMr_packetlossrateulqci_8_00(long mr_packetlossrateulqci_8_00) {
        this.mr_packetlossrateulqci_8_00 = mr_packetlossrateulqci_8_00;
    }
    public long getMr_packetlossrateulqci_8_01() {
        return mr_packetlossrateulqci_8_01;
    }
    public void setMr_packetlossrateulqci_8_01(long mr_packetlossrateulqci_8_01) {
        this.mr_packetlossrateulqci_8_01 = mr_packetlossrateulqci_8_01;
    }
    public long getMr_packetlossrateulqci_9_00() {
        return mr_packetlossrateulqci_9_00;
    }
    public void setMr_packetlossrateulqci_9_00(long mr_packetlossrateulqci_9_00) {
        this.mr_packetlossrateulqci_9_00 = mr_packetlossrateulqci_9_00;
    }
    public long getMr_packetlossrateulqci_9_01() {
        return mr_packetlossrateulqci_9_01;
    }
    public void setMr_packetlossrateulqci_9_01(long mr_packetlossrateulqci_9_01) {
        this.mr_packetlossrateulqci_9_01 = mr_packetlossrateulqci_9_01;
    }
}
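Outside of Spark, the contract of the constructor can be shown with a few lines (hypothetical usage, just to illustrate that the parts of one primary key must be sorted before combining):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

List<MS_PLRULQX> group = new ArrayList<>();   // all parts sharing the same int_id + start_time
// fill the list with the topic records of one primary key, then:
Collections.sort(group);                      // order the parts MS_PLRULQX1 ... MS_PLRULQX9
MS_PLRULQX_Combine combined = new MS_PLRULQX_Combine(group);
System.out.println(combined.getInt_id() + "_" + combined.getStart_time());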
Reading the data stream from the topic:
Dataset<Row> dsParsed = this.sparkSession.readStream()
        .format("kafka")
        .options(this.kafkaOptions)
        .option("subscribe", topicName)
        .option("startingOffsets", "earliest")
        .load();

String waterMarkName = "query" + this.getTopicEncodeName(topicName) + "Agg";
int windowDuration = 2 * 60;
int slideDuration = 60;
try {
    dsParsed.withWatermark("timestamp", "2 hours").createTempView(waterMarkName);
} catch (AnalysisException e1) {
    e1.printStackTrace();
    throw new RuntimeException(e1);
}

String aggSQL = "xxx"; // the actual aggregation SQL is omitted here
Dataset<Row> dsSQL1 = sparkSession.sql(aggSQL);
dsSQL1.printSchema();
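Note that the raw Kafka source only exposes the key, value, topic, partition, offset and timestamp columns, so the value has to be parsed into int_id, start_time and the two counters before the join can run. A minimal sketch, assuming the value is a comma-separated string (the real aggSQL and value format are not shown in this post):
Dataset<Row> parsed = dsParsed.selectExpr(
        "CAST(key AS STRING) AS key",
        "split(CAST(value AS STRING), ',')[0] AS int_id",
        "split(CAST(value AS STRING), ',')[1] AS start_time",
        "CAST(split(CAST(value AS STRING), ',')[2] AS BIGINT) AS MS_PLRULQX_00",
        "CAST(split(CAST(value AS STRING), ',')[3] AS BIGINT) AS MS_PLRULQX_01",
        "timestamp");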
Joining the received stream data by key:
The approach that works: group the rows by primary key (int_id + start_time), sort the rows within each group by their key (MS_PLRULQX1 ... MS_PLRULQX9), combine each group into one record, and print the result to the console.
KeyValueGroupedDataset<String, Row> tuple2Dataset = dsSQL1.groupByKey((MapFunction<Row, String>) row -> {
    String int_id = row.getAs("int_id");
    String start_time = row.getAs("start_time");
    return int_id + "_" + start_time;
}, Encoders.STRING());

Dataset<MS_PLRULQX_Combine> tuple2FlatMapDataset = tuple2Dataset.flatMapGroups(
        new FlatMapGroupsFunction<String, Row, MS_PLRULQX_Combine>() {
            private static final long serialVersionUID = 1400167811199763836L;

            @Override
            public Iterator<MS_PLRULQX_Combine> call(String key, Iterator<Row> values) throws Exception {
                List<MS_PLRULQX> list = new ArrayList<MS_PLRULQX>();
                while (values.hasNext()) {
                    Row value = values.next();
                    list.add(new MS_PLRULQX(value));
                }
                // natural ordering of MS_PLRULQX puts the parts in MS_PLRULQX1 ... MS_PLRULQX9 order
                Collections.sort(list);
                return Arrays.asList(new MS_PLRULQX_Combine(list)).iterator();
            }
        }, Encoders.bean(MS_PLRULQX_Combine.class));

Dataset<Row> rows = tuple2FlatMapDataset.toDF();
rows.writeStream()
        .format("console")
        .outputMode("complete")
        .trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES))
        .start();
Joining the received stream data by key, and the problem hit by an alternative approach:
This approach uses JavaRDD to group, sort, and combine the data.
import scala.Tuple2;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;

JavaPairRDD<String, MS_PLRULQX> pairs = dsSQL1.toJavaRDD().mapToPair(new PairFunction<Row, String, MS_PLRULQX>() {
    private static final long serialVersionUID = -5203498264050492910L;

    @Override
    public Tuple2<String, MS_PLRULQX> call(Row row) throws Exception {
        MS_PLRULQX value = new MS_PLRULQX(row);
        return new Tuple2<String, MS_PLRULQX>(value.getPrimaryKey(), value);
    }
});

JavaPairRDD<String, Iterable<MS_PLRULQX>> group = pairs.groupByKey();

JavaPairRDD<String, MS_PLRULQX_Combine> keyVsValuePairRDD = group.mapToPair(tuple -> {
    List<MS_PLRULQX> list = new ArrayList<MS_PLRULQX>();
    Iterator<MS_PLRULQX> it = tuple._2.iterator();
    while (it.hasNext()) {
        list.add(it.next());
    }
    Collections.sort(list);
    return new Tuple2<String, MS_PLRULQX_Combine>(tuple._1, new MS_PLRULQX_Combine(list));
});

JavaRDD<MS_PLRULQX_Combine> javaRDD = keyVsValuePairRDD
        .map(new Function<Tuple2<String, MS_PLRULQX_Combine>, MS_PLRULQX_Combine>() {
            private static final long serialVersionUID = -3031600976005716506L;

            @Override
            public MS_PLRULQX_Combine call(Tuple2<String, MS_PLRULQX_Combine> v1) throws Exception {
                return v1._2;
            }
        });

Dataset<Row> rows = this.sparkSession.createDataFrame(javaRDD, MS_PLRULQX_Combine.class);
rows.writeStream().format("console").outputMode("complete").trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES)).start();
sparkSession.streams().awaitAnyTermination();
The error is thrown at:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Queries with streaming sources must be executed with writeStream.start();;
at com.xx.xx.streaming.drivers.XXXDriver.run(xxxxDriver.java:85)
The failing statement is the line "JavaPairRDD<String, MS_PLRULQX> pairs = dsSQL1.toJavaRDD().mapToPair(new PairFunction<Row, String, MS_PLRULQX>() {".
In other words, calling .toJavaRDD() on a streaming Dataset behaves like calling dsSQL1.show() or dsSQL1.collect().foreach(println(_)): it tries to execute a query with a streaming source outside of writeStream.start(), which is not allowed.
The Spark documentation (Spark 2.3.1) only states that Structured Streaming supports the Dataset/DataFrame API; it does not mention RDD support, so this JavaRDD-based approach cannot be applied to a streaming source.
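This can also be seen from the API itself; a small sketch (Dataset.isStreaming() is part of the public Spark API):
// A streaming Dataset cannot be turned into an RDD or collected directly.
if (dsSQL1.isStreaming()) {
    // dsSQL1.toJavaRDD(), dsSQL1.show() and dsSQL1.collect() all fail with
    // "Queries with streaming sources must be executed with writeStream.start()".
    // The data has to stay in the Dataset/DataFrame API and be written out through writeStream().
}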