[Repost] Use TiFlash
https://docs.pingcap.com/tidb/v5.0/use-tiflash
After TiFlash is deployed, data replication does not automatically begin. You need to manually specify the tables to be replicated.
Depending on your needs, you can use TiDB to read TiFlash replicas for medium-scale analytical processing, or use TiSpark to read TiFlash replicas for large-scale analytical processing. See the following sections for details:
Create TiFlash replicas for tables
After TiFlash is connected to the TiKV cluster, data replication by default does not begin. You can send a DDL statement to TiDB through a MySQL client to create a TiFlash replica for a specific table:
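The DDL statement itself did not survive this copy. Judging from the replica syntax shown later in the "Set available zones" section, its basic form should be:

    ALTER TABLE table_name SET TIFLASH REPLICA count;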
The parameter of the above command is described as follows:
count: indicates the number of replicas. When the value is 0, the replica is deleted.
If you execute multiple DDL statements on the same table, only the last statement is ensured to take effect. In the following example, two DDL statements are executed on the table tpch50, but only the second statement (to delete the replica) takes effect.
Create two replicas for the table:
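For example, assuming the table is simply named tpch50 as in the text above:

    ALTER TABLE tpch50 SET TIFLASH REPLICA 2;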
Delete the replica:
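Under the same assumption, setting the count back to 0 removes the replica:

    ALTER TABLE tpch50 SET TIFLASH REPLICA 0;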
Notes:
If the table t is replicated to TiFlash through the above DDL statements, the table created using the following statement will also be automatically replicated to TiFlash:

    CREATE TABLE table_name like t;

For versions earlier than v4.0.6, if you create the TiFlash replica before using TiDB Lightning to import the data, the data import will fail. You must import data to the table before creating the TiFlash replica for the table.
If TiDB and TiDB Lightning are both v4.0.6 or later, you can import data to a table using TiDB Lightning regardless of whether that table has TiFlash replica(s). Note that this might slow down the TiDB Lightning procedure, depending on the NIC bandwidth of the Lightning host, the CPU and disk load of the TiFlash node, and the number of TiFlash replicas.
It is recommended that you do not replicate more than 1,000 tables because this lowers the PD scheduling performance. This limit will be removed in later versions.
Check replication progress
You can check the status of the TiFlash replicas of a specific table using the following statement. The table is specified using the WHERE clause. If you remove the WHERE clause, you will check the replica status of all tables.
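The query is not reproduced in this copy; a typical form, reading the information_schema.tiflash_replica system table that exposes the columns described below, is:

    SELECT * FROM information_schema.tiflash_replica WHERE TABLE_SCHEMA = '<db_name>' AND TABLE_NAME = '<table_name>';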
In the result of the above statement:

- AVAILABLE indicates whether the TiFlash replicas of this table are available or not. 1 means available and 0 means unavailable. Once the replicas become available, this status does not change. If you use DDL statements to modify the number of replicas, the replication status will be recalculated.
- PROGRESS means the progress of the replication. The value is between 0.0 and 1.0. 1 means at least one replica is replicated.
Speed up TiFlash replication
Before TiFlash replicas are added, each TiKV instance performs a full table scan and sends the scanned data to TiFlash as a "snapshot" to create replicas. By default, TiFlash replicas are added slowly, with low resource usage, in order to minimize the impact on the online service. If there are spare CPU and disk IO resources on your TiKV and TiFlash nodes, you can accelerate TiFlash replication by performing the following steps.
Temporarily increase the snapshot write speed limit for each TiKV and TiFlash instance by adjusting the TiFlash Proxy and TiKV configuration. For example, when using TiUP to manage configurations, the configuration is as below:
    tikv:
      server.snap-max-write-bytes-per-sec: 300MiB      # Default to 100MiB.
    tiflash-learner:
      raftstore.snap-handle-pool-size: 10              # Default to 2. Can be adjusted to >= node's CPU num * 0.6.
      raftstore.apply-low-priority-pool-size: 10       # Default to 1. Can be adjusted to >= node's CPU num * 0.6.
      server.snap-max-write-bytes-per-sec: 300MiB      # Default to 100MiB.

The configuration change takes effect after restarting the TiFlash and TiKV instances. The TiKV configuration can also be changed online by using the Dynamic Config SQL statement, which takes effect immediately without restarting TiKV instances:
    SET CONFIG tikv `server.snap-max-write-bytes-per-sec` = '300MiB';

After adjusting the preceding configurations, you cannot observe any acceleration yet, because the replication speed is still restricted by the global PD limit.
Use PD Control to progressively ease the new replica speed limit.
The default new replica speed limit is 30, which means that approximately 30 Regions add TiFlash replicas every minute. Executing the following command adjusts the limit to 60 for all TiFlash instances, which doubles the original speed:
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 60 add-peer

In the preceding command, you need to replace <CLUSTER_VERSION> with the actual cluster version and <PD_ADDRESS>:2379 with the address of any PD node. For example:

    tiup ctl:v6.1.1 pd -u http://192.168.1.4:2379 store limit all engine tiflash 60 add-peer

Within a few minutes, you will observe a significant increase in CPU and disk IO resource usage of the TiFlash nodes, and TiFlash should create replicas faster. At the same time, the TiKV nodes' CPU and disk IO resource usage increases as well.
If the TiKV and TiFlash nodes still have spare resources at this point and the latency of your online service does not increase significantly, you can further ease the limit, for example, triple the original speed:
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 90 add-peer

After the TiFlash replication is complete, revert to the default configuration to reduce the impact on online services.
Execute the following PD Control command to restore the default new replica speed limit:
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 30 add-peer

Comment out the changed configuration in TiUP to restore the default snapshot write speed limit:
    # tikv:
    #   server.snap-max-write-bytes-per-sec: 300MiB
    # tiflash-learner:
    #   raftstore.snap-handle-pool-size: 10
    #   raftstore.apply-low-priority-pool-size: 10
    #   server.snap-max-write-bytes-per-sec: 300MiB
Set available zones
When configuring replicas, if you need to distribute TiFlash replicas to multiple data centers for disaster recovery, you can configure available zones by following the steps below:
Specify labels for TiFlash nodes in the cluster configuration file.
    tiflash_servers:
      - host: 172.16.5.81
        config:
          flash.proxy.labels: zone=z1
      - host: 172.16.5.82
        config:
          flash.proxy.labels: zone=z1
      - host: 172.16.5.85
        config:
          flash.proxy.labels: zone=z2

After starting a cluster, specify the labels when creating replicas.
    ALTER TABLE table_name SET TIFLASH REPLICA count LOCATION LABELS location_labels;

For example:
    ALTER TABLE t SET TIFLASH REPLICA 2 LOCATION LABELS "zone";

PD schedules the replicas based on the labels. In this example, PD schedules the two replicas of the table t to two different available zones. You can use pd-ctl to view the scheduling:

    > tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store

    ...
    "address": "172.16.5.82:23913",
    "labels": [
      { "key": "engine", "value": "tiflash"},
      { "key": "zone", "value": "z1" }
    ],
    "region_count": 4,
    ...
    "address": "172.16.5.81:23913",
    "labels": [
      { "key": "engine", "value": "tiflash"},
      { "key": "zone", "value": "z1" }
    ],
    "region_count": 5,
    ...
    "address": "172.16.5.85:23913",
    "labels": [
      { "key": "engine", "value": "tiflash"},
      { "key": "zone", "value": "z2" }
    ],
    "region_count": 9,
    ...
For more information about scheduling replicas by using labels, see Schedule Replicas by Topology Labels, Multiple Data Centers in One City Deployment, and Three Data Centers in Two Cities Deployment.
Use TiDB to read TiFlash replicas
TiDB provides three ways to read TiFlash replicas. If you have added a TiFlash replica without any engine configuration, the CBO (cost-based optimization) mode is used by default.
Smart selection
For tables with TiFlash replicas, the TiDB optimizer automatically determines whether to use TiFlash replicas based on the cost estimation. You can use the desc or explain analyze statement to check whether or not a TiFlash replica is selected. For example:
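A minimal sketch of such a check, using a hypothetical table test.t that has a TiFlash replica; look for cop[tiflash] in the task column of the output:

    DESC SELECT COUNT(*) FROM test.t;
    EXPLAIN ANALYZE SELECT COUNT(*) FROM test.t;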
cop[tiflash] means that the task will be sent to TiFlash for processing. If a TiFlash replica is not selected, you can try to update the statistics using the analyze table statement, and then check the result again using the explain analyze statement.
Note that if a table has only a single TiFlash replica and the related node cannot provide service, queries in the CBO mode will repeatedly retry. In this situation, you need to specify the engine or use the manual hint to read data from the TiKV replica.
Engine isolation
Engine isolation means specifying that all queries use a replica of the specified engine by configuring the corresponding variable. The optional engines are "tikv", "tidb" (indicating the internal memory table area of TiDB, which stores some TiDB system tables and cannot be actively used by users), and "tiflash", with the following two configuration levels:
TiDB instance-level, namely, INSTANCE level. Add the following configuration item in the TiDB configuration file:
    [isolation-read]
    engines = ["tikv", "tidb", "tiflash"]

The INSTANCE-level default configuration is ["tikv", "tidb", "tiflash"].

SESSION level. Use the following statement to configure:

    set @@session.tidb_isolation_read_engines = "engine list separated by commas";

or

    set SESSION tidb_isolation_read_engines = "engine list separated by commas";

The default configuration of the SESSION level inherits from the configuration of the TiDB INSTANCE level.
The final engine configuration is the session-level configuration, that is, the session-level configuration overrides the instance-level configuration. For example, if you have configured "tikv" in the INSTANCE level and "tiflash" in the SESSION level, then the TiFlash replicas are read. If the final engine configuration is "tikv" and "tiflash", then the TiKV and TiFlash replicas are both read, and the optimizer automatically selects a better engine to execute.
Because TiDB Dashboard and other components need to read some system tables stored in the TiDB memory table area, it is recommended to always add the "tidb" engine to the instance-level engine configuration.
If the queried table does not have a replica of the specified engine (for example, the engine is configured as "tiflash" but the table does not have a TiFlash replica), the query returns an error.
Manual hint
Manual hint can force TiDB to use specified replicas for specific table(s) on the premise of satisfying engine isolation. Here is an example of using the manual hint:
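The example did not survive this copy; a sketch using the READ_FROM_STORAGE hint referenced below, on a hypothetical table t with a TiFlash replica:

    SELECT /*+ READ_FROM_STORAGE(TIFLASH[t]) */ COUNT(*) FROM t;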
If you set an alias to a table in a query statement, you must use the alias in the statement that includes a hint for the hint to take effect. For example:
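A sketch under the same assumptions: because t is aliased as a, the hint must reference the alias a instead of the table name:

    SELECT /*+ READ_FROM_STORAGE(TIFLASH[a]) */ COUNT(*) FROM t AS a;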
In the above statements, tiflash[] prompts the optimizer to read the TiFlash replicas. You can also use tikv[] to prompt the optimizer to read the TiKV replicas as needed. For hint syntax details, refer to READ_FROM_STORAGE.
If the table specified by a hint does not have a replica of the specified engine, the hint is ignored and a warning is reported. In addition, a hint only takes effect on the premise of engine isolation. If the engine specified in a hint is not in the engine isolation list, the hint is also ignored and a warning is reported.
The MySQL client of 5.7.7 or earlier versions clears optimizer hints by default. To use the hint syntax in these early versions, start the client with the --comments option, for example, mysql -h 127.0.0.1 -P 4000 -uroot --comments.
The relationship of smart selection, engine isolation, and manual hint
Among the above three ways of reading TiFlash replicas, engine isolation specifies the overall range of available replica engines; within this range, manual hint provides more fine-grained statement-level and table-level engine selection; finally, CBO makes the decision and selects a replica of an engine based on cost estimation within the specified engine list.
Before v4.0.3, the behavior of reading from the TiFlash replica in a non-read-only SQL statement (for example, INSERT INTO ... SELECT, SELECT ... FOR UPDATE, UPDATE ..., DELETE ...) is undefined. In v4.0.3 and later versions, TiDB internally ignores the TiFlash replica for a non-read-only SQL statement to guarantee data correctness. That is, for smart selection, TiDB automatically chooses the non-TiFlash replica; for engine isolation that specifies the TiFlash replica only, TiDB reports an error; and for manual hint, TiDB ignores the hint.
Use TiSpark to read TiFlash replicas
Currently, you can use TiSpark to read TiFlash replicas in a method similar to the engine isolation in TiDB. This method is to configure the spark.tispark.isolation_read_engines parameter. The parameter value defaults to tikv,tiflash, which means that TiDB reads data from TiFlash or from TiKV according to CBO's selection. If you set the parameter value to tiflash, it means that TiDB forcibly reads data from TiFlash.
Notes
When this parameter is set to tiflash, only the TiFlash replicas of all tables involved in the query are read and these tables must have TiFlash replicas; for tables that do not have TiFlash replicas, an error is reported. When this parameter is set to tikv, only the TiKV replica is read.
You can configure this parameter in one of the following ways:
- Add the following item in the spark-defaults.conf file:

    spark.tispark.isolation_read_engines tiflash

- Add --conf spark.tispark.isolation_read_engines=tiflash in the initialization command when initializing Spark shell or Thrift server.
- Set spark.conf.set("spark.tispark.isolation_read_engines", "tiflash") in Spark shell in a real-time manner.
- Set set spark.tispark.isolation_read_engines=tiflash in Thrift server after the server is connected via beeline.
Supported push-down calculations
TiFlash supports the push-down of the following operators:
- TableScan: Reads data from tables.
- Selection: Filters data.
- HashAgg: Performs data aggregation based on the Hash Aggregation algorithm.
- StreamAgg: Performs data aggregation based on the Stream Aggregation algorithm. StreamAgg only supports aggregation without the GROUP BY condition.
- TopN: Performs the TopN calculation.
- Limit: Performs the limit calculation.
- Project: Performs the projection calculation.
- HashJoin: Performs the join calculation based on the Hash Join algorithm, but with the following conditions:
- The operator can be pushed down only in the MPP mode.
- The operator must have an equality (equi-join) join condition.
- The push-down of Full Outer Join is not supported.
In TiDB, operators are organized in a tree structure. For an operator to be pushed down to TiFlash, all of the following prerequisites must be met:
- All of its child operators can be pushed down to TiFlash.
- If an operator contains expressions (most of the operators contain expressions), all expressions of the operator can be pushed down to TiFlash.
Currently, TiFlash supports the following push-down expressions:
- Mathematical functions: +, -, /, *, >=, <=, =, !=, <, >, round(int), round(double), abs, floor(int), ceil(int), ceiling(int)
- Logical functions: and, or, not, case when, if, ifnull, isnull, in
- Bitwise operations: bitand, bitor, bitneg, bitxor
- String functions: substr, char_length, replace, concat, concat_ws, left, right
- Date functions: date_format, timestampdiff, from_unixtime, unix_timestamp(int), unix_timestamp(decimal), str_to_date(date), str_to_date(datetime), datediff, year, month, day, extract(datetime)
- JSON function: json_length
- Conversion functions: cast(int as double), cast(int as decimal), cast(int as string), cast(int as time), cast(double as int), cast(double as decimal), cast(double as string), cast(double as time), cast(string as int), cast(string as double), cast(string as decimal), cast(string as time), cast(decimal as int), cast(decimal as string), cast(decimal as time), cast(time as int), cast(time as decimal), cast(time as string)
- Aggregate functions: min, max, sum, count, avg, approx_count_distinct
In addition, expressions that contain the Time/Bit/Set/Enum/Geometry type cannot be pushed down to TiFlash.
If a query encounters unsupported push-down calculations, TiDB needs to complete the remaining calculations, which might greatly affect the TiFlash acceleration effect. The currently unsupported operators and expressions might be supported in future versions.
Use the MPP mode
TiFlash supports using the MPP mode to execute queries, which introduces cross-node data exchange (data shuffle process) into the computation. The MPP mode is enabled by default and can be disabled by setting the value of the global/session variable tidb_allow_mpp to 0 or OFF.
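For example, a session-level switch might look like this (the variable name comes from the paragraph above; OFF and ON are the documented values):

    SET @@session.tidb_allow_mpp = OFF;  -- disable the MPP mode for the current session
    SET @@session.tidb_allow_mpp = ON;   -- enable it again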
MPP mode supports these physical algorithms: Broadcast Hash Join, Shuffled Hash Join, and Shuffled Hash Aggregation. The optimizer automatically determines which algorithm to use in a query. To check the specific query execution plan, you can execute the EXPLAIN statement. If the result of the EXPLAIN statement shows ExchangeSender and ExchangeReceiver operators, it indicates that the MPP mode has taken effect.
The following statement takes the table structure in the TPC-H test set as an example:
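The example statement and its plan are not reproduced in this copy; based on the nation and customer tables described below, the query is likely of this form:

    EXPLAIN SELECT COUNT(*) FROM customer c JOIN nation n ON c.c_nationkey = n.n_nationkey;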
In the example execution plan, the ExchangeReceiver and ExchangeSender operators are included. The execution plan indicates that after the nation table is read, the ExchangeSender operator broadcasts the table to each node, the HashJoin and HashAgg operations are performed on the nation table and the customer table, and then the results are returned to TiDB.
TiFlash provides the following two global/session variables to control whether to use Broadcast Hash Join:
- tidb_broadcast_join_threshold_size: The unit of the value is bytes. If the table size (in the unit of bytes) is less than the value of the variable, the Broadcast Hash Join algorithm is used. Otherwise, the Shuffled Hash Join algorithm is used.
- tidb_broadcast_join_threshold_count: The unit of the value is rows. If the objects of the join operation belong to a subquery, the optimizer cannot estimate the size of the subquery result set, so the size is determined by the number of rows in the result set. If the estimated number of rows in the subquery is less than the value of this variable, the Broadcast Hash Join algorithm is used. Otherwise, the Shuffled Hash Join algorithm is used.
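For example, the two thresholds above can be adjusted at the session level; the values below are purely illustrative, not the defaults:

    SET @@session.tidb_broadcast_join_threshold_size = 104857600;  -- 100 MiB; smaller tables use Broadcast Hash Join
    SET @@session.tidb_broadcast_join_threshold_count = 100000;    -- row threshold for subquery results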
Known issues of MPP
In the current version, TiFlash uses the start_ts of a query as the unique key of the query. In most cases, the start_ts of each query can uniquely identify a query, but in the following cases, different queries have the same start_ts:
- All queries in the same transaction have the same start_ts.
- When you use tidb_snapshot to read data at a specific historical time point, multiple queries can manually specify the same time point.
When start_ts cannot uniquely represent the MPP query, if TiFlash detects that different queries have the same start_ts at a given time, TiFlash might report an error. Typical error cases are as follows:
- When multiple queries with the same start_ts are sent to TiFlash at the same time, you might encounter the "task has been registered" error.
- When multiple simple queries with LIMIT are executed continuously in the same transaction, once the LIMIT condition is met, TiDB sends a cancel request to TiFlash to cancel the query. This request also uses start_ts to identify the query to be canceled. If there are other queries with the same start_ts in TiFlash, these queries might be canceled by mistake. An example of this issue can be found in #43426.
This issue is fixed in TiDB v6.6.0. It is recommended to use the latest LTS version.
Notes
Currently, TiFlash does not support some features. These features might be incompatible with the native TiDB:
In the TiFlash computation layer:
- Checking overflowed numerical values is not supported. For example, adding two maximum values of the BIGINT type, 9223372036854775807 + 9223372036854775807. The expected behavior of this calculation in TiDB is to return the ERROR 1690 (22003): BIGINT value is out of range error. However, if this calculation is performed in TiFlash, an overflow value of -2 is returned without any error.
- The window function is not supported.
- Reading data from TiKV is not supported.
- Currently, the sum function in TiFlash does not support the string-type argument, but TiDB cannot identify whether any string-type argument has been passed into the sum function during compiling. Therefore, when you execute statements similar to select sum(string_col) from t, TiFlash returns the [FLASH:Coprocessor:Unimplemented] CastStringAsReal is not supported. error. To avoid such an error in this case, you need to modify this SQL statement to select sum(cast(string_col as double)) from t.
- Currently, TiFlash's decimal division calculation is incompatible with that of TiDB. For example, when dividing decimals, TiFlash always performs the calculation using the type inferred during compiling. However, TiDB performs this calculation using a type that is more precise than the one inferred during compiling. Therefore, some SQL statements involving decimal division return different execution results when executed in TiDB + TiKV and in TiDB + TiFlash. For example:
    mysql> create table t (a decimal(3,0), b decimal(10, 0));
    Query OK, 0 rows affected (0.07 sec)
    mysql> insert into t values (43, 1044774912);
    Query OK, 1 row affected (0.03 sec)
    mysql> alter table t set tiflash replica 1;
    Query OK, 0 rows affected (0.07 sec)
    mysql> set session tidb_isolation_read_engines='tikv';
    Query OK, 0 rows affected (0.00 sec)
    mysql> select a/b, a/b + 0.0000000000001 from t where a/b;
    +--------+-----------------------+
    | a/b    | a/b + 0.0000000000001 |
    +--------+-----------------------+
    | 0.0000 |       0.0000000410001 |
    +--------+-----------------------+
    1 row in set (0.00 sec)
    mysql> set session tidb_isolation_read_engines='tiflash';
    Query OK, 0 rows affected (0.00 sec)
    mysql> select a/b, a/b + 0.0000000000001 from t where a/b;
    Empty set (0.01 sec)

In the example above, a/b's type inferred during compiling is Decimal(7,4) both in TiDB and in TiFlash. Constrained by Decimal(7,4), a/b's returned type should be 0.0000. In TiDB, a/b's runtime precision is higher than Decimal(7,4), so the original table data is not filtered out by the where a/b condition. However, in TiFlash, the calculation of a/b uses Decimal(7,4) as the result type, so the original table data is filtered out by the where a/b condition.
The MPP mode in TiFlash does not support the following features:
- The partitioned table is not supported. For queries on the partitioned tables, the MPP mode is not chosen by default.
- If the new_collations_enabled_on_first_bootstrap configuration item's value is true, the MPP mode does not support the string-type join key or the string column type in the group by aggregation. For these two query types, the MPP mode is not chosen by default.