Basic Syntax of the Time Series Database InfluxDB
1. Why you should know InfluxDB
What a time series database stores
Time series data is a series of data points each associated with a specific time. Examples include:
- Server performance metrics
- Financial averages over time
- Sensor data, such as temperature, barometric pressure, wind speeds, etc.
How time series databases differ from relational databases
Relational databases can be used to store and analyze time series data, but depending on the precision of your data, a query can involve potentially millions of rows. InfluxDB is purpose-built to store and query data by time, providing out-of-the-box functionality that optionally downsamples data after a specific age and a query engine optimized for time-based data.
2. Basic concepts
2.1 database & duration
database
A logical container for users, retention policies, continuous queries, and time series data.
duration
The attribute of the retention policy that determines how long InfluxDB stores data. Data older than the duration are automatically dropped from the database.
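A minimal sketch of how a database and a retention policy duration fit together (the database name "mydb" and policy name "one_month" are hypothetical):
-- hypothetical database and retention policy names
CREATE DATABASE "mydb"
-- keep data for 30 days on one replica and make this the default policy;
-- points older than 30d are dropped automatically
CREATE RETENTION POLICY "one_month" ON "mydb" DURATION 30d REPLICATION 1 DEFAULT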
2.2 field
The key-value pair in an InfluxDB data structure that records metadata and the actual data value. Fields are required in InfluxDB data structures and they are not indexed - queries on field values scan all points that match the specified time range and, as a result, are not performant relative to tags.
Field keys are strings and they store metadata. Field values are the actual data; they can be strings, floats, integers, or booleans. A field value is always associated with a timestamp.
2.3 Tags
Tags are optional. The key-value pair in the InfluxDB data structure that records metadata. You don’t need to have tags in your data structure, but it’s generally a good idea to make use of them because, unlike fields, tags are indexed. This means that queries on tags are faster and that tags are ideal for storing commonly-queried metadata.
Differences between tags and fields
Tags are indexed and fields are not indexed. This means that queries on tags are more performant than those on fields.
When to use tags vs. fields
(1) Store commonly-queried metadata in tags.
(2) Store data in tags if you plan to use them with the InfluxQL GROUP BY clause.
(3) Store data in fields if you plan to use them with an InfluxQL function.
(4) Store numeric values as fields (tag values only support string values).
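A minimal line protocol sketch of these guidelines, using a hypothetical measurement "weather" with tags "location"/"station" and fields "temperature"/"humidity":
-- measurement,tag_set field_set timestamp
-- tags are indexed strings; fields hold the actual (possibly numeric) values
weather,location=us-midwest,station=s1 temperature=82.1,humidity=41i 1465839830100400200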
2.4 measurement
The measurement acts as a container for tags, fields, and the time column, and the measurement name is the description of the data that are stored in the associated fields. Measurement names are strings, and, for any SQL users out there, a measurement is conceptually similar to a table.
2.5 point
In InfluxDB, a point represents a single data record, similar to a row in a SQL database table. Each point:
- has a measurement, a tag set, a field key, a field value, and a timestamp;
- is uniquely identified by its series and timestamp.
You cannot store more than one point with the same timestamp in a series. If you write a point to a series with a timestamp that matches an existing point, the field set becomes a union of the old and new field set, and any ties go to the new field set.
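A hypothetical illustration of that merge behavior (measurement and field names are made up):
-- first write
weather,location=us-midwest temperature=82 1465839830100400200
-- second write: same series and same timestamp, overlapping field set
weather,location=us-midwest temperature=83,humidity=40 1465839830100400200
-- the stored point is now temperature=83,humidity=40 (the new field set wins any conflict)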
2.6 series
In InfluxDB, a series is a collection of points that share a measurement, tag set, and field key. A point represents a single data record that has four components: a measurement, tag set, field set, and a timestamp. A point is uniquely identified by its series and timestamp.
series key
A series key identifies a particular series by measurement, tag set, and field key.
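For example (hypothetical data), the first two points below share the series key weather,location=us-midwest temperature, while the third belongs to a different series because its tag set differs:
weather,location=us-midwest temperature=82 1465839830100400200
weather,location=us-midwest temperature=83 1465839830200400200
weather,location=us-east temperature=75 1465839830100400200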
3. Queries
3.1 Fuzzy matching with regular expressions
In the WHERE clause, a regular expression can only match string field values or tag values.
1. Match values that start with a given string:
select fieldName from measurementName where fieldName =~ /^prefix/
2. Match values that end with a given string:
select fieldName from measurementName where fieldName =~ /suffix$/
3. Match values that contain a given string:
select fieldName from measurementName where fieldName =~ /substring/
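A concrete sketch, assuming a hypothetical measurement "h2o_feet" with a string field "level description":
-- values of "level description" that start with "below"
select "level description" from "h2o_feet" where "level description" =~ /^below/
-- values of "level description" that contain "water"
select "level description" from "h2o_feet" where "level description" =~ /water/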
3.2 SELECT caveats
The SELECT clause must include at least one field key.
A query requires at least one field key in the SELECT clause to return data. If the SELECT clause only includes a single tag key or several tag keys, the query returns an empty response. This behavior is a result of how the system stores data.
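For example, assuming the hypothetical schema above with tag "location" and field "water_level":
-- returns an empty response: only a tag key is selected
select "location" from "h2o_feet"
-- returns data: at least one field key is selected
select "water_level", "location" from "h2o_feet"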
3.3 WHERE restrictions
Use single quotes; otherwise the query returns no data or errors out.
(1) Single quote string field values in the WHERE clause. Queries with unquoted string field values or double quoted string field values will not return any data and, in most cases, will not return an error.
(2) Single quote tag values in the WHERE clause. Queries with unquoted tag values or double quoted tag values will not return any data and, in most cases, will not return an error.
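A quoting sketch against the same hypothetical schema:
-- correct: string field values and tag values are single quoted
select "water_level" from "h2o_feet" where "level description" = 'below 3 feet'
select "water_level" from "h2o_feet" where "location" = 'santa_monica'
-- returns no data (and usually no error): double quotes around the tag value
-- select "water_level" from "h2o_feet" where "location" = "santa_monica"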
3.4 Group By
(1) Note that the GROUP BY clause must come after the WHERE clause.
(2) The GROUP BY clause groups query results by one or more specified tags and/or a specified time interval.
(3) You cannot use GROUP BY to group fields.
(4) fill() changes the value reported for time intervals that have no data. By default, a GROUP BY time() interval with no data reports null as its value in the output column. Note that fill() must go at the end of the GROUP BY clause if you’re GROUP(ing) BY several things (for example, both tags and a time interval).
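A sketch combining tag and time grouping with fill(), again on the hypothetical "h2o_feet" measurement:
-- mean water level per location in 12-minute buckets; empty buckets report 0 instead of null
select mean("water_level") from "h2o_feet" where time >= '2019-08-18T00:00:00Z' and time <= '2019-08-18T00:30:00Z' group by "location", time(12m) fill(0)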
3.5 ORDER BY time DESC
By default, InfluxDB returns results in ascending time order; the first point returned has the oldest timestamp and the last point returned has the most recent timestamp. ORDER BY time DESC reverses that order so that InfluxDB returns the points with the most recent timestamps first.
Note: ORDER BY time DESC must appear after the GROUP BY clause if the query includes a GROUP BY clause. ORDER BY time DESC must appear after the WHERE clause if the query includes a WHERE clause and no GROUP BY clause.
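For example (hypothetical schema):
-- most recent points first; ORDER BY comes after the WHERE clause
select "water_level", "location" from "h2o_feet" where time >= '2019-08-18T00:00:00Z' order by time desc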
4. SHOW CARDINALITY
SHOW CARDINALITY is a group of commands for estimating or exactly counting the cardinality of measurements, series, tag keys, tag values, and field keys.
The SHOW CARDINALITY commands come in two variants: estimated and exact. Estimated values are computed with sketches and are a safe default for all cardinality sizes. Exact values are counted directly against TSM (Time-Structured Merge Tree) data; however, they are expensive to run for high-cardinality data.
The tag key and tag value commands are used as examples below.
4.1 SHOW TAG KEY CARDINALITY
Estimates or counts exactly the cardinality of the set of tag keys.
The ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional. When any of these clauses is used, the query falls back to an exact count. Filtering by time is only supported when the Time Series Index (TSI) is enabled; time is not supported in the WHERE clause.
Examples:
-- show estimated tag key cardinality
SHOW TAG KEY CARDINALITY
-- show exact tag key cardinality
SHOW TAG KEY EXACT CARDINALITY
4.2 SHOW TAG VALUES CARDINALITY
Estimates or counts exactly the cardinality of tag values for the specified tag key.
The ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional. When any of these clauses is used, the query falls back to an exact count. Filtering by time is only supported when the Time Series Index (TSI) is enabled.
Examples:
-- show estimated tag value cardinality for a specified tag key
SHOW TAG VALUES CARDINALITY WITH KEY = "myTagKey"
-- show exact tag value cardinality for a specified tag key
SHOW TAG VALUES EXACT CARDINALITY WITH KEY = "myTagKey"
4.3 Example use case
For example, in an earlier post we used Telegraf to write server monitoring data into InfluxDB, and CPU metrics are always among them (configured in telegraf.conf). Suppose one day we need to know how many servers have Telegraf deployed. That number can be obtained with SHOW TAG VALUES EXACT CARDINALITY.
The statement is:
SHOW TAG VALUES EXACT CARDINALITY FROM "cpu" WITH KEY = "host"
That is, it counts how many distinct host values exist in the cpu measurement. Because of the telegraf.conf settings, each server maps to a unique host value, so the number of host values equals the number of servers on which Telegraf is deployed.
5. DROP and DELETE
5.1 series
The DROP SERIES query deletes all points from a series in a database, and it drops the series from the index.
The query takes the following form, where you must specify either the FROM clause or the WHERE clause.
Syntax:
DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>'
A successful DROP SERIES query returns an empty result.
Without a FROM clause, DROP SERIES drops all points in the series that have a specific tag pair from all measurements in the database (that is, matching series are removed from every measurement, not just one).
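Two hedged examples (the measurement and tag names are hypothetical):
-- drop every series in the measurement "h2o_feet"
DROP SERIES FROM "h2o_feet"
-- no FROM clause: drop the matching series from every measurement in the database
DROP SERIES WHERE "location" = 'santa_monica'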
The difference from DELETE is:
The DELETE query deletes all points from a series in a database. Unlike DROP SERIES, DELETE does not drop the series from the index.
5.2 measurement_name
DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval>]
DELETE only allows filtering by tags and time.
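A hedged example (the measurement, tag, and time bound are hypothetical):
-- delete points for one tag value written before a given time
DELETE FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time < '2020-01-01T00:00:00Z'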
Dropping a measurement is relatively resource-intensive and can take a long time. Based on experience shared in the community, it is recommended to drop all of the measurement's series first, and only then drop the measurement itself.
That is, first run:
DROP SERIES FROM "measurement_name"
and then run:
DROP MEASUREMENT "measurement_name"
6. Common functions
The commonly used functions are summarized below:
| Type | Function | Description | Notes |
| --- | --- | --- | --- |
| Aggregations | COUNT() | Returns the number of non-null field values. | |
| Aggregations | DISTINCT() | Returns the list of unique field values. | DISTINCT() often returns several results with the same timestamp; InfluxDB assumes points with the same series and timestamp are duplicate points and simply overwrites any duplicate point with the most recent point in the destination measurement. |
| Aggregations | INTEGRAL() | Returns the area under the curve for subsequent field values. | InfluxDB calculates the area under the curve for subsequent field values and converts those results into the summed area per unit. The unit argument is an integer followed by a duration literal and it is optional. If the query does not specify the unit, the unit defaults to one second (1s). |
| Aggregations | MEAN() | Returns the arithmetic mean (average) of field values. | |
| Aggregations | MEDIAN() | Returns the middle value from a sorted list of field values. | MEDIAN() is nearly equivalent to PERCENTILE(field_key, 50), except MEDIAN() returns the average of the two middle field values if the field contains an even number of values. |
| Aggregations | MODE() | Returns the most frequent value in a list of field values. | MODE() returns the field value with the earliest timestamp if there’s a tie between two or more values for the maximum number of occurrences. |
| Aggregations | SPREAD() | Returns the difference between the minimum and maximum field values. | |
| Aggregations | STDDEV() | Returns the standard deviation of field values. | |
| Aggregations | SUM() | Returns the sum of field values. | |
| Selectors | BOTTOM() | Returns the smallest N field values. | BOTTOM() returns the field value with the earliest timestamp if there’s a tie between two or more values for the smallest value. |
| Selectors | FIRST() | Returns the field value with the oldest timestamp. | |
| Selectors | LAST() | Returns the field value with the most recent timestamp. | |
| Selectors | MAX() | Returns the greatest field value. | |
| Selectors | MIN() | Returns the lowest field value. | |
| Selectors | PERCENTILE() | Returns the Nth percentile field value. | |
| Selectors | SAMPLE() | Returns a random sample of N field values. | SAMPLE() uses reservoir sampling to generate the random points. |
| Selectors | TOP() | Returns the greatest N field values. | TOP() returns the field value with the earliest timestamp if there’s a tie between two or more values for the greatest value. |
| Transformations | ABS() | Returns the absolute value of the field value. | |
| Transformations | ACOS() | Returns the arccosine (in radians) of the field value. | Field values must be between -1 and 1. |
| Transformations | ASIN() | Returns the arcsine (in radians) of the field value. | Field values must be between -1 and 1. |
| Transformations | ATAN() | Returns the arctangent (in radians) of the field value. | Field values must be between -1 and 1. |
| Transformations | ATAN2() | Returns the arctangent of y/x in radians. | |
| Transformations | CEIL() | Returns the subsequent value rounded up to the nearest integer. | |
| Transformations | COS() | Returns the cosine of the field value. | |
| Transformations | CUMULATIVE_SUM() | Returns the running total of subsequent field values. | |
| Transformations | DERIVATIVE() | Returns the rate of change between subsequent field values. | InfluxDB calculates the difference between subsequent field values and converts those results into the rate of change per unit. The unit argument is an integer followed by a duration literal and it is optional. If the query does not specify the unit, the unit defaults to one second (1s). |
| Transformations | DIFFERENCE() | Returns the result of subtraction between subsequent field values. | |
| Transformations | ELAPSED() | Returns the difference between subsequent field values’ timestamps. | InfluxDB calculates the difference between subsequent timestamps. The unit option is an integer followed by a duration literal and it determines the unit of the returned difference. If the query does not specify the unit option, the query returns the difference between timestamps in nanoseconds. |
| Transformations | EXP() | Returns the exponential of the field value. | |
| Transformations | FLOOR() | Returns the subsequent value rounded down to the nearest integer. | |
| Transformations | LN() | Returns the natural logarithm of the field value. | |
| Transformations | LOG() | Returns the logarithm of the field value with base b. | |
| Transformations | LOG2() | Returns the logarithm of the field value to the base 2. | |
| Transformations | LOG10() | Returns the logarithm of the field value to the base 10. | |
| Transformations | MOVING_AVERAGE() | Returns the rolling average across a window of subsequent field values. | |
| Transformations | POW() | Returns the field value to the power of x. | |
| Transformations | ROUND() | Returns the subsequent value rounded to the nearest integer. | |
| Transformations | SIN() | Returns the sine of the field value. | |
| Transformations | SQRT() | Returns the square root of the field value. | |
| Transformations | TAN() | Returns the tangent of the field value. | |
| Predictors | HOLT_WINTERS() | Returns N predicted field values. | Predict when data values will cross a given threshold; compare predicted values with actual values to detect anomalies in your data. |
| Technical analysis | CHANDE_MOMENTUM_OSCILLATOR() | The Chande Momentum Oscillator (CMO) is a technical momentum indicator developed by Tushar Chande. The CMO indicator is created by calculating the difference between the sum of all recent higher data points and the sum of all recent lower data points, then dividing the result by the sum of all data movement over a given time period. The result is multiplied by 100 to give the -100 to +100 range. | |
| Technical analysis | EXPONENTIAL_MOVING_AVERAGE() | An exponential moving average (EMA) is a type of moving average that is similar to a simple moving average, except that more weight is given to the latest data. It’s also known as the “exponentially weighted moving average.” This type of moving average reacts faster to recent data changes than a simple moving average. | |
| Technical analysis | DOUBLE_EXPONENTIAL_MOVING_AVERAGE() | The Double Exponential Moving Average (DEMA) attempts to remove the inherent lag associated with moving averages by placing more weight on recent values. The name suggests this is achieved by applying a double exponential smoothing, which is not the case. The name “double” comes from the fact that the value of an EMA is doubled. To keep it in line with the actual data and to remove the lag, the value “EMA of EMA” is subtracted from the previously doubled EMA. | |
| Technical analysis | KAUFMANS_EFFICIENCY_RATIO() | Kaufman’s Efficiency Ratio (ER) is calculated by dividing the data change over a period by the absolute sum of the data movements that occurred to achieve that change. The resulting ratio ranges between 0 and 1, with higher values representing a more efficient or trending market. | The ER is very similar to the Chande Momentum Oscillator (CMO). The difference is that the CMO takes market direction into account, but if you take the absolute CMO and divide by 100, you get the Efficiency Ratio. |
| Technical analysis | KAUFMANS_ADAPTIVE_MOVING_AVERAGE() | Kaufman’s Adaptive Moving Average (KAMA) is a moving average designed to account for sample noise or volatility. KAMA will closely follow data points when the data swings are relatively small and noise is low. KAMA will adjust when the data swings widen and follow data from a greater distance. This trend-following indicator can be used to identify the overall trend, time turning points, and filter data movements. | |
| Technical analysis | TRIPLE_EXPONENTIAL_MOVING_AVERAGE() | The triple exponential moving average (TEMA) was developed to filter out volatility from conventional moving averages. While the name implies that it’s a triple exponential smoothing, it’s actually a composite of a single exponential moving average, a double exponential moving average, and a triple exponential moving average. | |
| Technical analysis | TRIPLE_EXPONENTIAL_DERIVATIVE() | The triple exponential derivative indicator, commonly referred to as “TRIX,” is an oscillator used to identify oversold and overbought markets, and can also be used as a momentum indicator. TRIX calculates a triple exponential moving average of the log of the data input over the period of time. The previous value is subtracted from the current value. This prevents cycles that are shorter than the defined period from being considered by the indicator. | Like many oscillators, TRIX oscillates around a zero line. When used as an oscillator, a positive value indicates an overbought market while a negative value indicates an oversold market. When used as a momentum indicator, a positive value suggests momentum is increasing while a negative value suggests momentum is decreasing. Many analysts believe that when TRIX crosses above the zero line it gives a buy signal, and when it closes below the zero line it gives a sell signal. |
| Technical analysis | RELATIVE_STRENGTH_INDEX() | The relative strength index (RSI) is a momentum indicator that compares the magnitude of recent increases and decreases over a specified time period to measure the speed and change of data movements. | |
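As a rough usage sketch, these functions are usually combined with a WHERE time clause and GROUP BY time(); the "cpu" measurement and "usage_user" field below are the ones Telegraf typically writes, but treat the schema as an assumption:
-- hourly average and peak user CPU per host over the last day
SELECT MEAN("usage_user"), MAX("usage_user") FROM "cpu" WHERE time > now() - 1d GROUP BY "host", time(1h) fill(none)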
References:
https://blog.csdn.net/xuxiannian/article/details/103559246
https://blog.csdn.net/funnyPython/article/details/89888972