Explanation and tests of order by, sort by, distribute by, and cluster by in Hive
Conclusions:
order by: global sort. It is the only one of the four sorting clauses whose global ordering is visible in the terminal output. It runs with a single reducer, so the reduce task may take a very long time; in strict mode a limit clause is mandatory.
sort by: can run multiple reducers; each reducer sorts its own output, ascending by default.
distribute by: controls how the map output is partitioned among the reducers. It is usually combined with sort by to order each partition by a specific column.
cluster by: if the distribute by column and the sort by column are the same, and the default ascending order is acceptable, cluster by can replace the distribute by + sort by pair. This is said to cost sort by its parallelism, yet it still does not guarantee globally ordered output (that is what the tests show); see the sketch right after this list.
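A minimal side-by-side sketch of the four clauses, written against the same emp(empno, ename, sal) table that the tests below use:

-- global sort: one reducer; strict mode additionally requires limit
select empno,ename,sal from emp order by sal asc limit 20;
-- per-reducer sort: each reducer's slice is ordered, the whole output is not
select empno,ename,sal from emp sort by sal asc;
-- route rows to reducers by sal, then sort within each reducer
select empno,ename,sal from emp distribute by sal sort by sal asc;
-- shorthand for "distribute by sal sort by sal" (ascending only)
select empno,ename,sal from emp cluster by sal;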
1. order by (global sort) test:
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp order by sal asc limit 20;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp order by sal asc limit 20;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0059, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0059/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0059
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2017-07-30 02:21:58,594 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:22:59,734 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:24:00,084 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:25:00,859 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:25:28,846 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 123.57 sec
2017-07-30 02:26:03,306 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 128.97 sec
MapReduce Total cumulative CPU time: 2 minutes 8 seconds 970 msec
Ended Job = job_1501198522682_0059
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 1 Cumulative CPU: 128.97 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 8 seconds 970 msec
OK
empno ename sal
7369 SMITH 800.0
7900 JAMES 950.0
7876 ADAMS 1100.0
7654 MARTIN 1250.0
7521 WARD 1250.0
7934 MILLER 1300.0
7844 TURNER 1500.0
7499 ALLEN 1600.0
7782 CLARK 2450.0
7698 BLAKE 2850.0
7566 JONES 2975.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7839 KING 5000.0
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
Time taken: 442.499 seconds, Fetched: 16 row(s)
From the job log we can see that this query launched 3 map tasks and 1 reduce task; the result is sorted ascending on sal and is globally ordered. Note that although we set the number of reduce tasks to 3 before running the query, only one reduce task was actually launched, which shows that order by forces a single reducer to carry out the global sort. When the dataset is large, that single reduce task becomes the performance bottleneck of the whole job.
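As an aside, the strict-mode requirement from the conclusions is easy to check; a sketch, assuming strict mode is toggled via hive.mapred.mode (the first select is expected to be rejected):

set hive.mapred.mode=strict;
-- rejected: in strict mode, order by must carry a limit clause
select empno,ename,sal from emp order by sal asc;
-- accepted: limit bounds what the single reducer has to emit
select empno,ename,sal from emp order by sal asc limit 20;
set hive.mapred.mode=nonstrict;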
2. sort by test:
==============================================
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp sort by sal asc;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0063, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0063/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0063
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:40:32,898 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:41:18,531 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 5.05 sec
2017-07-30 02:41:27,062 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 17.9 sec
2017-07-30 02:42:15,867 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 28.39 sec
2017-07-30 02:42:18,655 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 30.31 sec
2017-07-30 02:42:20,045 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.92 sec
MapReduce Total cumulative CPU time: 33 seconds 920 msec
Ended Job = job_1501198522682_0063
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 40.36 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 40 seconds 360 msec
OK
empno ename sal
7654 MARTIN 1250.0
7844 TURNER 1500.0
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7900 JAMES 950.0
7521 WARD 1250.0
7934 MILLER 1300.0
7499 ALLEN 1600.0
7782 CLARK 2450.0
7566 JONES 2975.0
7902 FORD 3000.0
7788 SCOTT 3000.0
7839 KING 5000.0
7369 SMITH 800.0
7876 ADAMS 1100.0
7698 BLAKE 2850.0
Time taken: 136.972 seconds, Fetched: 16 row(s)
The sort by sal asc query launched 3 map tasks and 3 reduce tasks, matching the settings above. The output is ordered only within each reducer's slice, which confirms that sort by sorts inside each reducer rather than globally.
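The job log above also names the knobs that decide how many reducers run; a sketch of the three ways to influence the count (the values are illustrative, not recommendations):

-- average input per reducer, in bytes; Hive derives the count from this
set hive.exec.reducers.bytes.per.reducer=256000000;
-- hard upper bound on the derived count
set hive.exec.reducers.max=10;
-- pin an exact count (newer name for mapred.reduce.tasks)
set mapreduce.job.reduces=3;
select empno,ename,sal from emp sort by sal asc;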
============================================
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp sort by sal desc;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp sort by sal desc;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0064, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0064/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0064
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:43:57,741 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:44:41,415 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 4.27 sec
2017-07-30 02:44:44,267 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 9.69 sec
2017-07-30 02:44:46,779 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 14.64 sec
2017-07-30 02:45:27,003 Stage-1 map = 100%, reduce = 22%, Cumulative CPU 17.61 sec
2017-07-30 02:45:28,551 Stage-1 map = 100%, reduce = 44%, Cumulative CPU 22.5 sec
2017-07-30 02:45:29,834 Stage-1 map = 100%, reduce = 56%, Cumulative CPU 24.54 sec
2017-07-30 02:45:32,660 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 29.26 sec
2017-07-30 02:45:35,291 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 35.0 sec
2017-07-30 02:45:37,542 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 36.92 sec
MapReduce Total cumulative CPU time: 36 seconds 920 msec
Ended Job = job_1501198522682_0064
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 36.92 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 36 seconds 920 msec
OK
empno ename sal
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7844 TURNER 1500.0
7654 MARTIN 1250.0
7839 KING 5000.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7566 JONES 2975.0
7782 CLARK 2450.0
7499 ALLEN 1600.0
7934 MILLER 1300.0
7521 WARD 1250.0
7900 JAMES 950.0
7698 BLAKE 2850.0
7876 ADAMS 1100.0
7369 SMITH 800.0
Time taken: 132.663 seconds, Fetched: 16 row(s)
The sort by sal desc query likewise launched 3 map tasks and 3 reduce tasks, matching the settings above. Again the output is ordered only within each reducer's slice, confirming that sort by sorts inside each reducer.
======================================================================
3. distribute by test:
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp distribute by sal sort by sal asc;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp distribute by sal sort by sal asc;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0067, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0067/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0067
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:52:14,833 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:53:04,015 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 6.0 sec
2017-07-30 02:53:05,393 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 11.35 sec
2017-07-30 02:53:06,657 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 11.72 sec
2017-07-30 02:53:46,542 Stage-1 map = 100%, reduce = 22%, Cumulative CPU 20.83 sec
2017-07-30 02:53:47,981 Stage-1 map = 100%, reduce = 44%, Cumulative CPU 23.77 sec
2017-07-30 02:53:50,909 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 27.81 sec
2017-07-30 02:53:54,581 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.24 sec
MapReduce Total cumulative CPU time: 37 seconds 240 msec
Ended Job = job_1501198522682_0067
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 39.56 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 39 seconds 560 msec
OK
empno ename sal
7876 ADAMS 1100.0
7654 MARTIN 1250.0
7521 WARD 1250.0
7782 CLARK 2450.0
7566 JONES 2975.0
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7934 MILLER 1300.0
7499 ALLEN 1600.0
7698 BLAKE 2850.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7839 KING 5000.0
7369 SMITH 800.0
7900 JAMES 950.0
7844 TURNER 1500.0
Time taken: 129.0 seconds, Fetched: 16 row(s)
The distribute by sal sort by sal asc query launched 3 map tasks and 3 reduce tasks, matching the settings above. The output is ordered only within each reducer's slice, again showing that sort by works per reducer. We can also see that rows with equal sal values come out side by side, which shows that distribute by sal routes rows to reducers by the value of sal.
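distribute by is not required to use the same column as sort by. A hedged sketch, assuming emp also carries the classic deptno column (not shown in these tests): group rows by department while ordering each department's rows by salary. Note that distribute by must appear before sort by in the query.

-- every row of a given deptno lands in the same reducer, sorted there by sal
select empno,ename,deptno,sal from emp distribute by deptno sort by sal desc;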
============================================================
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp distribute by sal sort by sal desc;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp distribute by sal sort by sal desc;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0068, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0068/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0068
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:55:49,989 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:56:29,911 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 4.4 sec
2017-07-30 02:56:37,401 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 16.42 sec
2017-07-30 02:57:17,118 Stage-1 map = 100%, reduce = 22%, Cumulative CPU 19.05 sec
2017-07-30 02:57:21,673 Stage-1 map = 100%, reduce = 24%, Cumulative CPU 21.07 sec
2017-07-30 02:57:24,397 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 23.17 sec
2017-07-30 02:57:25,844 Stage-1 map = 100%, reduce = 56%, Cumulative CPU 26.7 sec
2017-07-30 02:57:27,134 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 30.89 sec
2017-07-30 02:57:28,508 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 32.52 sec
2017-07-30 02:57:29,728 Stage-1 map = 100%, reduce = 83%, Cumulative CPU 34.62 sec
2017-07-30 02:57:32,582 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 39.17 sec
MapReduce Total cumulative CPU time: 39 seconds 170 msec
Ended Job = job_1501198522682_0068
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 39.17 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 39 seconds 170 msec
OK
empno ename sal
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7566 JONES 2975.0
7782 CLARK 2450.0
7654 MARTIN 1250.0
7521 WARD 1250.0
7876 ADAMS 1100.0
7839 KING 5000.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7698 BLAKE 2850.0
7499 ALLEN 1600.0
7934 MILLER 1300.0
7844 TURNER 1500.0
7900 JAMES 950.0
7369 SMITH 800.0
Time taken: 135.076 seconds, Fetched: 16 row(s)
The distribute by sal sort by sal desc query behaves the same way: 3 map tasks and 3 reduce tasks, output ordered only within each reducer's slice, and rows with equal sal values emitted side by side, because distribute by sal sends rows with the same sal to the same reducer.
=========================================================
4. cluster by test:
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp cluster by sal;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp cluster by sal;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0069, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0069/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0069
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:59:19,969 Stage-1 map = 0%, reduce = 0%
2017-07-30 03:00:00,105 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 5.02 sec
2017-07-30 03:00:02,559 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 9.55 sec
2017-07-30 03:00:03,736 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 14.01 sec
2017-07-30 03:00:40,134 Stage-1 map = 100%, reduce = 22%, Cumulative CPU 18.03 sec
2017-07-30 03:00:41,421 Stage-1 map = 100%, reduce = 44%, Cumulative CPU 21.22 sec
2017-07-30 03:00:44,508 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 27.56 sec
2017-07-30 03:00:45,937 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 31.57 sec
2017-07-30 03:00:47,208 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.51 sec
MapReduce Total cumulative CPU time: 33 seconds 510 msec
Ended Job = job_1501198522682_0069
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 33.51 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 510 msec
OK
empno ename sal
7876 ADAMS 1100.0
7654 MARTIN 1250.0
7521 WARD 1250.0
7782 CLARK 2450.0
7566 JONES 2975.0
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7934 MILLER 1300.0
7499 ALLEN 1600.0
7698 BLAKE 2850.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7839 KING 5000.0
7369 SMITH 800.0
7900 JAMES 950.0
7844 TURNER 1500.0
Time taken: 119.103 seconds, Fetched: 16 row(s)
cluster by, which under the conditions above can replace the distribute by + sort by combination, launched 3 map tasks and 3 reduce tasks, matching the settings. The output is ordered only within each reducer.
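Notice that this result is row-for-row identical to the distribute by sal sort by sal asc output in test 3, which is exactly the equivalence the shorthand promises:

-- equivalent queries; cluster by sorts ascending only, so a descending
-- sort still needs the explicit two-clause form
select empno,ename,sal from emp cluster by sal;
select empno,ename,sal from emp distribute by sal sort by sal asc;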
Test conclusion: of the four sorting clauses above, only order by delivers a true global sort. To turn sort by's per-reducer ordering into a global one, the result still has to go through an order by pass. distribute by and cluster by gather identical key values onto the same reducer, but contrary to what is sometimes claimed they do not yield globally ordered output. Other techniques may reach a global order, but not these two clauses on their own.
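If a globally ordered result is needed on top of sort by, one sketch (an assumption about usage, not something tested above) is to run the partial sort in a subquery and finish with a single order by pass; the final pass still funnels through one reducer, so it mainly pays off when the inner query shrinks the data first:

select empno,ename,sal
from (select empno,ename,sal from emp sort by sal asc) t
order by sal asc limit 20;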