Reposted from:

https://github.com/digoal/blog/blob/master/201708/20170818_02.md?spm=a2c4e.11153940.blogcont179210.17.6f682764HWr8pC&file=20170818_02.md

Background

Greenplum supports both row store and column store, and both heap tables and AO tables. How do they differ, and how should you choose between them?

How row store and column store work

1. Row store organizes data by row: each row is one tuple, stored contiguously. To read a given column, every column in front of it must be deformed first, so accessing the first column and accessing the last column actually have different costs.

deform is described in detail in 《PostgreSQL 向量化执行插件(瓦片式实现) 10x提速OLAP》.

Row store summary:

A full-table scan reads more data blocks.

Lower compression ratio.

The cost of reading a column varies: the later the column, the higher the cost.

Poorly suited to vectorized execution and JIT architectures. (In short, poorly suited to batch-style computation.)

When the table must be REWRITTEN, e.g. when adding a column with a default value, the whole table is rewritten.

2. Column store organizes data by column: each column maps to one file or a batch of files. Reading any single column costs the same, but reading multiple columns means accessing multiple files, so the more columns touched, the higher the overhead.

Column store summary:

High compression ratio.

Only available with AO storage (covered later).

Reading any column costs the same.

Very well suited to vectorized execution and JIT architectures; scanning and aggregating large volumes of data is much more efficient.

Reading many columns is more expensive because more files must be accessed, e.g. detail queries.

When the table must be REWRITTEN, e.g. when adding a column with a default value, no full-table rewrite is needed: only the file for the new column is written. (See the sketch below.)
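A minimal sketch of that difference, using hypothetical tables t_row (heap) and t_col (column-store AO). On the heap table the full rewrite is expected to show up as a new relfilenode; the column-store table should keep its relfilenode and only write a file for the new column:

create table t_row (id int8, v int8) distributed by (id);
create table t_col (id int8, v int8) with (appendonly=true, orientation=column) distributed by (id);

select relname, relfilenode from pg_class where relname in ('t_row', 't_col');

alter table t_row add column flag int8 default 0;  -- rewrites every row of the heap table
alter table t_col add column flag int8 default 0;  -- only writes the new column's file

select relname, relfilenode from pg_class where relname in ('t_row', 't_col');  -- t_row: new relfilenode expected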

When to choose row store

If the workload leans OLTP, e.g. frequent detail queries on the table (outputting many columns) together with more updates and deletes, consider row store.

When to choose column store

If the workload leans OLAP, with frequent statistics over the data, choose column store.

When a high compression ratio is needed, choose column store.

For mixed workloads, use partitioned tables, e.g. partitioned along the time dimension: recent partitions that serve many detail queries use row store, while historical partitions that mostly serve statistics use column store. (A sketch follows.)
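A sketch of such a mixed layout (hypothetical table and boundaries; per-partition storage options are regular Greenplum syntax):

create table t_mix (
  id int8,
  crt_time timestamp,
  info text
)
distributed by (id)
partition by range (crt_time)
(
  -- historical data: mostly statistics, so column-store AO with compression
  partition hist start ('2016-01-01'::timestamp) end ('2017-07-01'::timestamp)
    with (appendonly=true, orientation=column, compresstype=zlib),
  -- recent data: mostly detail queries and DML, so row-store heap
  partition recent start ('2017-07-01'::timestamp) end ('2017-10-01'::timestamp)
    with (appendonly=false)
);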

How heap tables and AO tables work

1. A heap table is simply PostgreSQL heap storage. Every change to a heap table generates REDO, so point-in-time recovery is possible. Heap tables cannot support logical incremental backup, however, because any block of the table may change, which leaves no convenient position to record against heap storage.

When a transaction ends, durability comes from the clog plus REDO. REDO is also used to build MIRROR segments for data redundancy.

2. An AO table is, as the name says, append-only storage. Deletes and updates mark rows in a separate BITMAP file; a bit plus offset alignment determines whether a given row of the AO table has been deleted.

When a transaction ends, FSYNC must be called, recording the offset of the last data block written. (Even if that block holds only one row, the next transaction appends a brand-new block.) The corresponding data blocks are also sent to the MIRROR for data redundancy.

AO tables are therefore a poor fit for small transactions: every commit triggers an FSYNC, and any free space left in a committed block is never reused. (You can verify that single-row commits on an AO table suffer severe IO amplification; a sketch follows.)
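A rough way to observe this (hypothetical table names; the shell loop issues one transaction per row, so every commit fsyncs and abandons the rest of its block):

create table t_ao (id int8) with (appendonly=true) distributed by (id);
create table t_heap (id int8) distributed by (id);

-- single-row transactions, e.g. from the shell:
--   for i in $(seq 1 1000); do psql -c "insert into t_ao values ($i)"; done
--   for i in $(seq 1 1000); do psql -c "insert into t_heap values ($i)"; done

select pg_size_pretty(pg_total_relation_size('t_ao'));   -- expected: far larger than t_heap
select pg_size_pretty(pg_total_relation_size('t_heap'));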

Even so, AO tables are an excellent fit for OLAP: bulk loads, high compression ratios, and logical backup with incremental support, since each backup only needs to record the offset it reached, plus a full copy of the (small) deletion BITMAP each time.
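The auxiliary objects behind an AO table, including the visibility-map relation that records the deletions, can be inspected through the pg_appendonly catalog (a sketch against the hypothetical t_ao above; column names as in Greenplum 4.3/5):

select relid::regclass, columnstore, visimaprelid::regclass, visimapidxid::regclass
from pg_appendonly
where relid = 't_ao'::regclass;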

When to choose heap tables

When writes are dominated by small transactions, choose heap tables.

When point-in-time recovery is needed, choose heap tables.

When to choose AO tables

When column store is needed, choose AO tables.

When data is written in bulk, choose AO tables.

Benchmark: row-store deform cost vs column store

1. Create a function that builds a 400-column table (row-store heap table, row-store AO table, or column-store AO table).

create or replace function f(name, int, text) returns void as $$
declare
  res text := '';
begin
  -- build the column list: c1 int8, c2 int8, ..., c$2 int8
  for i in 1..$2 loop
    res := res||'c'||i||' int8,';
  end loop;
  res := rtrim(res, ',');
  -- $3 selects the storage: column-store AO, row-store AO, or row-store heap
  if $3 = 'ao_col' then
    res := 'create table '||$1||'('||res||') with (appendonly=true, blocksize=8192, compresstype=none, orientation=column)';
  elsif $3 = 'ao_row' then
    res := 'create table '||$1||'('||res||') with (appendonly=true, blocksize=8192, orientation=row)';
  elsif $3 = 'heap_row' then
    res := 'create table '||$1||'('||res||') with (appendonly=false)';
  else
    raise notice 'use ao_col, ao_row, heap_row as $3';
    return;
  end if;
  execute res;
end;
$$ language plpgsql;

2. Create the tables:

postgres=# select f('tbl_ao_col', 400, 'ao_col');
postgres=# select f('tbl_ao_row', 400, 'ao_row');
postgres=# select f('tbl_heap_row', 400, 'heap_row');

3. Create a function to load data: the first column and the last three columns carry the test data, and every other column is filled with 1.

create or replace function f_ins1(name, int, int8) returns void as $$
declare
  res text := '';
begin
  -- column layout: id, 1, 1, ..., 1, id, random()*10000, random()*100000
  for i in 1..($2-4) loop
    res := res||'1,';
  end loop;
  res := 'id,'||res;
  res := rtrim(res, ',');
  res := 'insert into '||$1||' select '||res||',id,random()*10000,random()*100000 from generate_series(1,'||$3||') t(id)';
  execute res;
end;
$$ language plpgsql;

4. Load data:

postgres=# select f_ins1('tbl_ao_col',400,1000000);

5. Create a function to load data: the first four columns carry the test data, and every other column is filled with 1. (Defined here for completeness; it is not used in the run below.)

create or replace function f_ins2(name, int, int8) returns void as $$
declare
  res text := '';
begin
  -- column layout: id, id, random()*10000, random()*100000, 1, 1, ..., 1
  for i in 1..($2-4) loop
    res := res||'1,';
  end loop;
  res := 'id,id,random()*10000,random()*100000,'||res;
  res := rtrim(res, ',');
  res := 'insert into '||$1||' select '||res||' from generate_series(1,'||$3||') t(id)';
  execute res;
end;
$$ language plpgsql;

6. Load data into the other two tables by copying from tbl_ao_col (the first lines show the f_ins1 call from step 4 completing):

postgres=# select f_ins1('tbl_ao_col',400,1000000);
 f_ins1
--------

(1 row)

postgres=# insert into tbl_ao_row select * from tbl_ao_col;
INSERT 0 1000000
postgres=# insert into tbl_heap_row select * from tbl_ao_col;
INSERT 0 1000000

7. Analyze the tables:

postgres=# analyze tbl_ao_col ;
ANALYZE
postgres=# analyze tbl_ao_row;
ANALYZE
postgres=# analyze tbl_heap_row;
ANALYZE

8. Table sizes:

postgres=# select pg_size_pretty(pg_relation_size('tbl_ao_col'));
 pg_size_pretty
----------------
 3060 MB
(1 row)

postgres=# select pg_size_pretty(pg_relation_size('tbl_ao_row'));
 pg_size_pretty
----------------
 3117 MB
(1 row)

postgres=# select pg_size_pretty(pg_relation_size('tbl_heap_row'));
 pg_size_pretty
----------------
 3473 MB
(1 row)
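The three sizes are close because the tables were created with compresstype=none. A sketch (not part of the original test, assuming zlib is available) of what compression adds on the column store:

create table tbl_ao_col_z (like tbl_ao_col)
with (appendonly=true, orientation=column, compresstype=zlib, compresslevel=5);
insert into tbl_ao_col_z select * from tbl_ao_col;
select pg_size_pretty(pg_relation_size('tbl_ao_col_z'));  -- expected: well under 3060 MB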

9. Row-store heap table, aggregating leading columns:

postgres=# explain analyze select c2,count(*),sum(c3),avg(c3),min(c3),max(c3) from tbl_heap_row group by c2;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 48:1  (slice2; segments: 48)  (cost=136132.40..136132.42 rows=1 width=96)
   Rows out:  1 rows at destination with 135 ms to end, start offset by 1.922 ms.
   ->  HashAggregate  (cost=136132.40..136132.42 rows=1 width=96)
         Group By: tbl_heap_row.c2
         Rows out:  1 rows (seg42) with 0.002 ms to first row, 36 ms to end, start offset by 48 ms.
         ->  Redistribute Motion 48:48  (slice1; segments: 48)  (cost=136132.35..136132.37 rows=1 width=96)
               Hash Key: tbl_heap_row.c2
               Rows out:  48 rows at destination (seg42) with 53 ms to end, start offset by 48 ms.
               ->  HashAggregate  (cost=136132.35..136132.35 rows=1 width=96)
                     Group By: tbl_heap_row.c2
                     Rows out:  Avg 1.0 rows x 48 workers.  Max 1 rows (seg0) with 0.008 ms to first row, 1.993 ms to end, start offset by 48 ms.
                     ->  Seq Scan on tbl_heap_row  (cost=0.00..121134.54 rows=20831 width=16)
                           Rows out:  Avg 20833.3 rows x 48 workers.  Max 20854 rows (seg42) with 40 ms to first row, 73 ms to end, start offset by 50 ms.
 Slice statistics:
   (slice0)    Executor memory: 345K bytes.
   (slice1)    Executor memory: 751K bytes avg x 48 workers, 751K bytes max (seg0).
   (slice2)    Executor memory: 359K bytes avg x 48 workers, 374K bytes max (seg42).
 Statement statistics:
   Memory used: 128000K bytes
 Settings: optimizer=off
 Optimizer status: legacy query optimizer
 Total runtime: 138.524 ms
(22 rows)

10. Row-store heap table, aggregating trailing columns:

postgres=# explain analyze select c398,count(*),sum(c399),avg(c399),min(c399),max(c399) from tbl_heap_row group by c398;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 48:1  (slice2; segments: 48)  (cost=136576.82..136799.05 rows=9877 width=96)
   Rows out:  10001 rows at destination with 212 ms to end, start offset by 1.917 ms.
   ->  HashAggregate  (cost=136576.82..136799.05 rows=206 width=96)
         Group By: tbl_heap_row.c398
         Rows out:  Avg 208.4 rows x 48 workers.  Max 223 rows (seg17) with 0.001 ms to first row, 70 ms to end, start offset by 14 ms.
         ->  Redistribute Motion 48:48  (slice1; segments: 48)  (cost=136132.35..136329.89 rows=206 width=96)
               Hash Key: tbl_heap_row.c398
               Rows out:  Avg 8762.2 rows x 48 workers at destination.  Max 9422 rows (seg46) with 93 ms to end, start offset by 48 ms.
               ->  HashAggregate  (cost=136132.35..136132.35 rows=206 width=96)
                     Group By: tbl_heap_row.c398
                     Rows out:  Avg 8762.2 rows x 48 workers.  Max 8835 rows (seg2) with 0.003 ms to first row, 12 ms to end, start offset by 49 ms.
                     ->  Seq Scan on tbl_heap_row  (cost=0.00..121134.54 rows=20831 width=16)
                           Rows out:  Avg 20833.3 rows x 48 workers.  Max 20854 rows (seg42) with 40 ms to first row, 133 ms to end, start offset by 51 ms.
 Slice statistics:
   (slice0)    Executor memory: 377K bytes.
   (slice1)    Executor memory: 1156K bytes avg x 48 workers, 1156K bytes max (seg0).
   (slice2)    Executor memory: 414K bytes avg x 48 workers, 414K bytes max (seg1).
 Statement statistics:
   Memory used: 128000K bytes
 Settings: optimizer=off
 Optimizer status: legacy query optimizer
 Total runtime: 214.024 ms
(22 rows)

11. Row-store AO table, aggregating leading columns:

postgres=# explain analyze select c2,count(*),sum(c3),avg(c3),min(c3),max(c3) from tbl_ao_row group by c2;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 48:1  (slice2; segments: 48)  (cost=124755.04..124755.07 rows=1 width=96)
   Rows out:  1 rows at destination with 149 ms to end, start offset by 1.890 ms.
   ->  HashAggregate  (cost=124755.04..124755.07 rows=1 width=96)
         Group By: tbl_ao_row.c2
         Rows out:  1 rows (seg42) with 0.004 ms to first row, 55 ms to end, start offset by 64 ms.
         ->  Redistribute Motion 48:48  (slice1; segments: 48)  (cost=124755.00..124755.02 rows=1 width=96)
               Hash Key: tbl_ao_row.c2
               Rows out:  48 rows at destination (seg42) with 32 ms to end, start offset by 64 ms.
               ->  HashAggregate  (cost=124755.00..124755.00 rows=1 width=96)
                     Group By: tbl_ao_row.c2
                     Rows out:  Avg 1.0 rows x 48 workers.  Max 1 rows (seg0) with 0.001 ms to first row, 46 ms to end, start offset by 59 ms.
                     ->  Append-only Scan on tbl_ao_row  (cost=0.00..109755.00 rows=20834 width=16)
                           Rows out:  Avg 20833.3 rows x 48 workers.  Max 20854 rows (seg42) with 24 ms to end, start offset by 59 ms.
 Slice statistics:
   (slice0)    Executor memory: 345K bytes.
   (slice1)    Executor memory: 770K bytes avg x 48 workers, 770K bytes max (seg0).
   (slice2)    Executor memory: 359K bytes avg x 48 workers, 374K bytes max (seg42).
 Statement statistics:
   Memory used: 128000K bytes
 Settings: optimizer=off
 Optimizer status: legacy query optimizer
 Total runtime: 152.386 ms
(22 rows)

12. Row-store AO table, aggregating trailing columns:

postgres=# explain analyze select c398,count(*),sum(c399),avg(c399),min(c399),max(c399) from tbl_ao_row group by c398;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 48:1  (slice2; segments: 48)  (cost=125186.01..125401.52 rows=9578 width=96)
   Rows out:  10001 rows at destination with 183 ms to end, start offset by 1.846 ms.
   ->  HashAggregate  (cost=125186.01..125401.52 rows=200 width=96)
         Group By: tbl_ao_row.c398
         Rows out:  Avg 208.4 rows x 48 workers.  Max 223 rows (seg17) with 0.003 ms to first row, 97 ms to end, start offset by 22 ms.
         ->  Redistribute Motion 48:48  (slice1; segments: 48)  (cost=124755.00..124946.56 rows=200 width=96)
               Hash Key: tbl_ao_row.c398
               Rows out:  Avg 8762.2 rows x 48 workers at destination.  Max 9422 rows (seg46) with 32 ms to end, start offset by 68 ms.
               ->  HashAggregate  (cost=124755.00..124755.00 rows=200 width=96)
                     Group By: tbl_ao_row.c398
                     Rows out:  Avg 8762.2 rows x 48 workers.  Max 8835 rows (seg2) with 0.013 ms to first row, 48 ms to end, start offset by 22 ms.
                     ->  Append-only Scan on tbl_ao_row  (cost=0.00..109755.00 rows=20834 width=16)
                           Rows out:  Avg 20833.3 rows x 48 workers.  Max 20854 rows (seg42) with 22 ms to end, start offset by 71 ms.
 Slice statistics:
   (slice0)    Executor memory: 377K bytes.
   (slice1)    Executor memory: 1144K bytes avg x 48 workers, 1144K bytes max (seg0).
   (slice2)    Executor memory: 414K bytes avg x 48 workers, 414K bytes max (seg0).
 Statement statistics:
   Memory used: 128000K bytes
 Settings: optimizer=off
 Optimizer status: legacy query optimizer
 Total runtime: 184.723 ms
(22 rows)

13. Column-store AO table, aggregating leading columns:

postgres=# explain analyze select c2,count(*),sum(c3),avg(c3),min(c3),max(c3) from tbl_ao_col group by c2;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 48:1  (slice2; segments: 48)  (cost=122928.04..122928.07 rows=1 width=96)
   Rows out:  1 rows at destination with 104 ms to end, start offset by 1.878 ms.
   ->  HashAggregate  (cost=122928.04..122928.07 rows=1 width=96)
         Group By: tbl_ao_col.c2
         Rows out:  1 rows (seg42) with 0.003 ms to first row, 18 ms to end, start offset by 55 ms.
         ->  Redistribute Motion 48:48  (slice1; segments: 48)  (cost=122928.00..122928.02 rows=1 width=96)
               Hash Key: tbl_ao_col.c2
               Rows out:  48 rows at destination (seg42) with 30 ms to end, start offset by 55 ms.
               ->  HashAggregate  (cost=122928.00..122928.00 rows=1 width=96)
                     Group By: tbl_ao_col.c2
                     Rows out:  Avg 1.0 rows x 48 workers.  Max 1 rows (seg0) with 0.007 ms to first row, 3.991 ms to end, start offset by 54 ms.
                     ->  Append-only Columnar Scan on tbl_ao_col  (cost=0.00..107928.00 rows=20834 width=16)
                           Rows out:  0 rows (seg0) with 40 ms to end, start offset by 56 ms.
 Slice statistics:
   (slice0)    Executor memory: 345K bytes.
   (slice1)    Executor memory: 903K bytes avg x 48 workers, 903K bytes max (seg0).
   (slice2)    Executor memory: 359K bytes avg x 48 workers, 374K bytes max (seg42).
 Statement statistics:
   Memory used: 128000K bytes
 Settings: optimizer=off
 Optimizer status: legacy query optimizer
 Total runtime: 106.859 ms
(22 rows)

14. Column-store AO table, aggregating trailing columns:

postgres=# explain analyze select c398,count(*),sum(c399),avg(c399),min(c399),max(c399) from tbl_ao_col group by c398;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 48:1  (slice2; segments: 48)  (cost=123364.18..123582.28 rows=9693 width=96)
   Rows out:  10001 rows at destination with 120 ms to end, start offset by 1.921 ms.
   ->  HashAggregate  (cost=123364.18..123582.28 rows=202 width=96)
         Group By: tbl_ao_col.c398
         Rows out:  Avg 208.4 rows x 48 workers.  Max 223 rows (seg17) with 0.001 ms to first row, 54 ms to end, start offset by 35 ms.
         ->  Redistribute Motion 48:48  (slice1; segments: 48)  (cost=122928.00..123121.86 rows=202 width=96)
               Hash Key: tbl_ao_col.c398
               Rows out:  Avg 8762.2 rows x 48 workers at destination.  Max 9422 rows (seg46) with 31 ms to end, start offset by 63 ms.
               ->  HashAggregate  (cost=122928.00..122928.00 rows=202 width=96)
                     Group By: tbl_ao_col.c398
                     Rows out:  Avg 8762.2 rows x 48 workers.  Max 8835 rows (seg2) with 0.004 ms to first row, 8.004 ms to end, start offset by 82 ms.
                     ->  Append-only Columnar Scan on tbl_ao_col  (cost=0.00..107928.00 rows=20834 width=16)
                           Rows out:  0 rows (seg0) with 28 ms to end, start offset by 64 ms.
 Slice statistics:
   (slice0)    Executor memory: 377K bytes.
   (slice1)    Executor memory: 1272K bytes avg x 48 workers, 1272K bytes max (seg0).
   (slice2)    Executor memory: 414K bytes avg x 48 workers, 414K bytes max (seg0).
 Statement statistics:
   Memory used: 128000K bytes
 Settings: optimizer=off
 Optimizer status: legacy query optimizer
 Total runtime: 122.173 ms
(22 rows)

For grouped aggregation on a non-distribution key, Greenplum runs a multi-stage aggregation, which is exactly what the plans above show:

Stage 1: aggregate locally on each SEGMENT. (This stage scans all the data, and it is where the storage formats diverge: the row-store deform overhead is clearly visible when the aggregated columns sit near the end of the row.) Based on the proportion of distinct values in the grouping column, Greenplum decides whether to redistribute rows directly or to aggregate locally first and redistribute the smaller partial results.

Stage 2: redistribute the results by the grouping key. (Only the grouping key is needed for redistribution, and by now the result set is small.)

Stage 3: aggregate again locally on each SEGMENT, over the redistributed data.

Stage 4: return the results to the master; if necessary the master invokes the aggregate's final function (over what is by then a tiny number of rows and little computation).

How to check a table's storage type

pg_class.relstorage indicates how each object is stored:

postgres=# select distinct relstorage from pg_class ;
 relstorage
------------
 a -- row-oriented AO table
 h -- heap table, index
 x -- external table
 v -- view
 c -- column-oriented AO table
(5 rows)
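For example, a quick check of the three test tables created earlier (a sketch; expected values follow from their definitions):

select relname, relstorage
from pg_class
where relname in ('tbl_ao_col', 'tbl_ao_row', 'tbl_heap_row');
-- expected: tbl_ao_col => c, tbl_ao_row => a, tbl_heap_row => h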

List the AO tables in the current database:

postgres=# select t2.nspname, t1.relname from pg_class t1, pg_namespace t2 where t1.relnamespace=t2.oid and relstorage in ('c', 'a');
nspname | relname
----------+-------------------
postgres | tbl_tag
postgres | tbl_pos_1_prt_p1
postgres | tbl_pos_1_prt_p2
postgres | tbl_pos_1_prt_p3
postgres | tbl_pos_1_prt_p4
postgres | tbl_pos_1_prt_p5
postgres | tbl_pos_1_prt_p6
postgres | tbl_pos_1_prt_p7
postgres | tbl_pos_1_prt_p8
postgres | tbl_pos_1_prt_p9
postgres | tbl_pos_1_prt_p10
postgres | tbl_pos
postgres | xx_czrk_qm_col
postgres | ao1
(14 rows)

List the heap tables in the current database:

select t2.nspname, t1.relname from pg_class t1, pg_namespace t2 where t1.relnamespace=t2.oid and relstorage in ('h') and relkind='r';

References

《PostgreSQL 向量化执行插件(瓦片式实现) 10x提速OLAP》

http://greenplum.org/docs/best_practices/schema.html
