Getting Started with InfluxDB

1. Introduction to InfluxDB

A time series database (Time Series Database, TSDB for short) is a relatively new category of database whose defining characteristic is that every record carries a time column.
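
For example, in InfluxDB's line protocol each point ends with its timestamp (nanoseconds by default); the measurement, tag, and field below are illustrative:

cpu,host=server01 usage=0.64 1434055562000000000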

What business scenario are time series databases actually suited for? The classic answer: monitoring systems.

InfluxDB is one of the most popular time series databases today. It is written in Go, has no external dependencies, and is easy to install and configure, which makes it well suited to building the monitoring layer of large distributed systems.

2. Installation

2.1 Linux (RPM)

Package: influxdb-1.7.7.x86_64.rpm

Install command: yum localinstall influxdb-1.7.7.x86_64.rpm
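
After installation, start the service and verify that it responds. A minimal sketch, assuming a systemd-based distribution (the RPM ships an influxdb unit; the /ping endpoint returns 204 No Content when the server is healthy):

sudo systemctl start influxdb       # start the server
sudo systemctl enable influxdb      # start automatically at boot
curl -i http://localhost:8086/ping  # expect "HTTP/1.1 204 No Content"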

2.2 Bundled commands

/usr/bin/influxd  the InfluxDB server daemon

/usr/bin/influx  the InfluxDB command-line client

/usr/bin/influx_inspect  tool for inspecting and exporting on-disk data (see the sketch below)

/usr/bin/influx_stress  stress-testing tool

/usr/bin/influx_tsm  database conversion tool (converts a database from the b1 or bz1 format to tsm1)
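
For example, influx_inspect can check TSM files or dump stored points back out as line protocol. A hedged sketch against the default directories (the output path /tmp/export.lp is illustrative):

influx_inspect verify -dir /var/lib/influxdb
influx_inspect export -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal -out /tmp/export.lp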

2.3 Data directories

/var/lib/influxdb/data  the final stored data; files end in .tsm

/var/lib/influxdb/meta  database metadata

/var/lib/influxdb/wal  write-ahead log (WAL) files

2.4 Configuration file

Path: /etc/influxdb/influxdb.conf. An annotated copy of the file follows.

### Welcome to the InfluxDB configuration file.

# The values in this file override the default values used by the system if
# a config option is not specified. The commented out lines are the configuration
# field and the default value used. Uncommenting a line and changing the value
# will change the value used at runtime when the process is restarted.

# Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
# The data includes a random ID, os, arch, version, the number of series and other
# usage data. No data from user databases is ever transmitted.
# Change this option to true to disable reporting.
# reporting-disabled = false

# Bind address to use for the RPC service for backup and restore.
# bind-address = "127.0.0.1:8088"
###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###

[meta]
  # Where the metadata/raft database is stored
  dir = "/var/lib/influxdb/meta"

  # Automatically create a default retention policy when creating a database.
  # retention-autocreate = true

  # If log messages are printed for the meta service
  # logging-enabled = true
###
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
  # The directory where the TSM storage engine stores TSM files.
  dir = "/var/lib/influxdb/data"

  # The directory where the TSM storage engine stores WAL files.
  wal-dir = "/var/lib/influxdb/wal"

  # The amount of time that a write will wait before fsyncing. A duration
  # greater than 0 can be used to batch up multiple fsync calls. This is useful for slower
  # disks or when WAL write contention is seen. A value of 0s fsyncs every write to the WAL.
  # Values in the range of 0-100ms are recommended for non-SSD disks.
  # wal-fsync-delay = "0s"

  # The type of shard index to use for new shards. The default is an in-memory index that is
  # recreated at startup. A value of "tsi1" will use a disk based index that supports higher
  # cardinality datasets.
  # index-version = "inmem"

  # Trace logging provides more verbose output around the tsm engine. Turning
  # this on can provide more useful output for debugging tsm engine issues.
  # trace-logging-enabled = false

  # Whether queries should be logged before execution. Very useful for troubleshooting, but will
  # log any sensitive data contained within a query.
  # query-log-enabled = true

  # Validates incoming writes to ensure keys only have valid unicode characters.
  # This setting will incur a small overhead because every key must be checked.
  # validate-keys = false

  # Settings for the TSM engine

  # CacheMaxMemorySize is the maximum size a shard's cache can
  # reach before it starts rejecting writes.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # cache-max-memory-size = "1g"

  # CacheSnapshotMemorySize is the size at which the engine will
  # snapshot the cache and write it to a TSM file, freeing up memory
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # cache-snapshot-memory-size = "25m"

  # CacheSnapshotWriteColdDuration is the length of time at
  # which the engine will snapshot the cache and write it to
  # a new TSM file if the shard hasn't received writes or deletes
  # cache-snapshot-write-cold-duration = "10m"

  # CompactFullWriteColdDuration is the duration at which the engine
  # will compact all TSM files in a shard if it hasn't received a
  # write or delete
  # compact-full-write-cold-duration = "4h"

  # The maximum number of concurrent full and level compactions that can run at one time. A
  # value of 0 results in 50% of runtime.GOMAXPROCS(0) used at runtime. Any number greater
  # than 0 limits compactions to that value. This setting does not apply
  # to cache snapshotting.
  # max-concurrent-compactions = 0

  # CompactThroughput is the rate limit in bytes per second that we
  # will allow TSM compactions to write to disk. Note that short bursts are allowed
  # to happen at a possibly larger value, set by CompactThroughputBurst
  # compact-throughput = "48m"

  # CompactThroughputBurst is the rate limit in bytes per second that we
  # will allow TSM compactions to write to disk.
  # compact-throughput-burst = "48m"

  # If true, then the mmap advise value MADV_WILLNEED will be provided to the kernel with respect to
  # TSM files. This setting has been found to be problematic on some kernels, and defaults to off.
  # It might help users who have slow disks in some cases.
  # tsm-use-madv-willneed = false

  # Settings for the inmem index

  # The maximum series allowed per database before writes are dropped. This limit can prevent
  # high cardinality issues at the database level. This limit can be disabled by setting it to
  # 0.
  # max-series-per-database = 1000000

  # The maximum number of tag values per tag that are allowed before writes are dropped. This limit
  # can prevent high cardinality tag values from being written to a measurement. This limit can be
  # disabled by setting it to 0.
  # max-values-per-tag = 100000

  # Settings for the tsi1 index

  # The threshold, in bytes, when an index write-ahead log file will compact
  # into an index file. Lower sizes will cause log files to be compacted more
  # quickly and result in lower heap usage at the expense of write throughput.
  # Higher sizes will be compacted less frequently, store more series in-memory,
  # and provide higher write throughput.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # max-index-log-file-size = "1m"

  # The size of the internal cache used in the TSI index to store previously
  # calculated series results. Cached results will be returned quickly from the cache rather
  # than needing to be recalculated when a subsequent query with a matching tag key/value
  # predicate is executed. Setting this value to 0 will disable the cache, which may
  # lead to query performance issues.
  # This value should only be increased if it is known that the set of regularly used
  # tag key/value predicates across all measurements for a database is larger than 100. An
  # increase in cache size may lead to an increase in heap usage.
  series-id-set-cache-size = 100
###
### [coordinator]
###
### Controls the clustering service configuration.
###

[coordinator]
  # The default time a write request will wait until a "timeout" error is returned to the caller.
  # write-timeout = "10s"

  # The maximum number of concurrent queries allowed to be executing at one time. If a query is
  # executed and exceeds this limit, an error is returned to the caller. This limit can be disabled
  # by setting it to 0.
  # max-concurrent-queries = 0

  # The maximum time a query is allowed to execute before being killed by the system. This limit
  # can help prevent run away queries. Setting the value to 0 disables the limit.
  # query-timeout = "0s"

  # The time threshold when a query will be logged as a slow query. This limit can be set to help
  # discover slow or resource intensive queries. Setting the value to 0 disables the slow query logging.
  # log-queries-after = "0s"

  # The maximum number of points a SELECT can process. A value of 0 will make
  # the maximum point count unlimited. This will only be checked every second so queries will not
  # be aborted immediately when hitting the limit.
  # max-select-point = 0

  # The maximum number of series a SELECT can run. A value of 0 will make the maximum series
  # count unlimited.
  # max-select-series = 0

  # The maximum number of group by time bucket a SELECT can create. A value of zero will make the maximum
  # number of buckets unlimited.
  # max-select-buckets = 0
###
### [retention]
###
### Controls the enforcement of retention policies for evicting old data.
###

[retention]
  # Determines whether retention policy enforcement is enabled.
  # enabled = true

  # The interval of time when retention policy enforcement checks run.
  # check-interval = "30m"
###
### [shard-precreation]
###
### Controls the precreation of shards, so they are available before data arrives.
### Only shards that, after creation, will have both a start- and end-time in the
### future, will ever be created. Shards are never precreated that would be wholly
### or partially in the past.
###

[shard-precreation]
  # Determines whether shard pre-creation service is enabled.
  # enabled = true

  # The interval of time when the check to pre-create new shards runs.
  # check-interval = "10m"

  # The default period ahead of the endtime of a shard group that its successor
  # group is created.
  # advance-period = "30m"
###
### [monitor]
###
### Controls the system self-monitoring, statistics and diagnostics.
###
### The internal database for monitoring data is created automatically if
### it does not already exist. The target retention within this database
### is called 'monitor' and is also created with a retention period of 7 days
### and a replication factor of 1, if it does not exist. In all cases
### this retention policy is configured as the default for the database.

[monitor]
  # Whether to record statistics internally.
  # store-enabled = true

  # The destination database for recorded statistics
  # store-database = "_internal"

  # The interval at which to record statistics
  # store-interval = "10s"
###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###

[http]
  # Determines whether HTTP endpoint is enabled.
  enabled = true

  # Determines whether the Flux query endpoint is enabled.
  # flux-enabled = false

  # Determines whether the Flux query logging is enabled.
  # flux-log-enabled = false

  # The bind address used by the HTTP service.
  bind-address = ":8086"

  # Determines whether user authentication is enabled over HTTP/HTTPS.
  # auth-enabled = false

  # The default realm sent back when issuing a basic auth challenge.
  # realm = "InfluxDB"

  # Determines whether HTTP request logging is enabled.
  # log-enabled = true

  # Determines whether the HTTP write request logs should be suppressed when the log is enabled.
  # suppress-write-log = false

  # When HTTP request logging is enabled, this option specifies the path where
  # log entries should be written. If unspecified, the default is to write to stderr, which
  # intermingles HTTP logs with internal InfluxDB logging.
  #
  # If influxd is unable to access the specified path, it will log an error and fall back to writing
  # the request log to stderr.
  # access-log-path = ""

  # Filters which requests should be logged. Each filter is of the pattern NNN, NNX, or NXX where N is
  # a number and X is a wildcard for any number. To filter all 5xx responses, use the string 5xx.
  # If multiple filters are used, then only one has to match. The default is to have no filters which
  # will cause every request to be printed.
  # access-log-status-filters = []

  # Determines whether detailed write logging is enabled.
  # write-tracing = false

  # Determines whether the pprof endpoint is enabled. This endpoint is used for
  # troubleshooting and monitoring.
  # pprof-enabled = true

  # Enables a pprof endpoint that binds to localhost:6060 immediately on startup.
  # This is only needed to debug startup issues.
  # debug-pprof-enabled = false

  # Determines whether HTTPS is enabled.
  # https-enabled = false

  # The SSL certificate to use when HTTPS is enabled.
  # https-certificate = "/etc/ssl/influxdb.pem"

  # Use a separate private key location.
  # https-private-key = ""

  # The JWT auth shared secret to validate requests using JSON web tokens.
  # shared-secret = ""

  # The default chunk size for result sets that should be chunked.
  # max-row-limit = 0

  # The maximum number of HTTP connections that may be open at once. New connections that
  # would exceed this limit are dropped. Setting this value to 0 disables the limit.
  # max-connection-limit = 0

  # Enable http service over unix domain socket
  # unix-socket-enabled = false

  # The path of the unix domain socket.
  # bind-socket = "/var/run/influxdb.sock"

  # The maximum size of a client request body, in bytes. Setting this value to 0 disables the limit.
  # max-body-size = 25000000

  # The maximum number of writes processed concurrently.
  # Setting this to 0 disables the limit.
  # max-concurrent-write-limit = 0

  # The maximum number of writes queued for processing.
  # Setting this to 0 disables the limit.
  # max-enqueued-write-limit = 0

  # The maximum duration for a write to wait in the queue to be processed.
  # Setting this to 0 or setting max-concurrent-write-limit to 0 disables the limit.
  # enqueued-write-timeout = 0
###
### [logging]
###
### Controls how the logger emits logs to the output.
###

[logging]
  # Determines which log encoder to use for logs. Available options
  # are auto, logfmt, and json. auto will use a more user-friendly
  # output format if the output terminal is a TTY, but the format is not as
  # easily machine-readable. When the output is a non-TTY, auto will use
  # logfmt.
  # format = "auto"

  # Determines which level of logs will be emitted. The available levels
  # are error, warn, info, and debug. Logs that are equal to or above the
  # specified level will be emitted.
  # level = "info"

  # Suppresses the logo output that is printed when the program is started.
  # The logo is always suppressed if STDOUT is not a TTY.
  # suppress-logo = false
###
### [subscriber]
###
### Controls the subscriptions, which can be used to fork a copy of all data
### received by the InfluxDB host.
###

[subscriber]
  # Determines whether the subscriber service is enabled.
  # enabled = true

  # The default timeout for HTTP writes to subscribers.
  # http-timeout = "30s"

  # Allows insecure HTTPS connections to subscribers. This is useful when testing with self-
  # signed certificates.
  # insecure-skip-verify = false

  # The path to the PEM encoded CA certs file. If the empty string, the default system certs will be used
  # ca-certs = ""

  # The number of writer goroutines processing the write channel.
  # write-concurrency = 40

  # The number of in-flight writes buffered in the write channel.
  # write-buffer-size = 1000
###
### [[graphite]]
###
### Controls one or many listeners for Graphite data.
###

[[graphite]]
  # Determines whether the graphite endpoint is enabled.
  # enabled = false
  # Target database; default: "graphite"
  # database = "graphite"
  # Retention policy to write into; defaults to the database's default policy
  # retention-policy = ""
  # Bind address; default: ":2003"
  # bind-address = ":2003"
  # Protocol; default: "tcp"
  # protocol = "tcp"
  # Write consistency level; default: "one"
  # consistency-level = "one"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # number of batches that may be pending in memory
  # batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # udp-read-buffer = 0

  ### This string joins multiple matching 'measurement' values providing more control over the final measurement name.
  # separator = "."

  ### Default tags that will be added to all metrics. These can be overridden at the template level
  ### or by tags extracted from metric
  # tags = ["region=us-east", "zone=1c"]

  ### Each template line requires a template pattern. It can have an optional
  ### filter before the template and separated by spaces. It can also have optional extra
  ### tags following the template. Multiple tags should be separated by commas and no spaces
  ### similar to the line protocol format. There can be only one default template.
  # templates = [
  #   "*.app env.service.resource.measurement",
  #   # Default template
  #   "server.*",
  # ]
###
### [[collectd]]
###
### Controls one or many listeners for collectd data.
###

[[collectd]]
  # Determines whether the collectd endpoint is enabled; default: false
  # enabled = false
  # Bind address; default: ":25826"
  # bind-address = ":25826"
  # Target database; default: "collectd"
  # database = "collectd"
  # Retention policy to write into; defaults to the database's default policy
  # retention-policy = ""
  #
  # The collectd service supports either scanning a directory for multiple types
  # db files, or specifying a single db file.
  # Path to types.db; the upstream default is "/usr/share/collectd/types.db"
  # typesdb = "/usr/local/share/collectd"
  #
  # security-level = "none"
  # auth-file = "/etc/collectd/auth_file"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # Number of batches that may be pending in memory
  # batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "10s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # read-buffer = 0

  # Multi-value plugins can be handled two ways.
  # "split" will parse and store the multi-value plugin data into separate measurements
  # "join" will parse and store the multi-value plugin as a single multi-value measurement.
  # "split" is the default behavior for backward compatibility with previous versions of influxdb.
  # parse-multivalue-plugin = "split"
###
### [opentsdb]
###
### Controls one or many listeners for OpenTSDB data.
### [[opentsdb]]
# 是否启用该模块,默认值:false
# enabled = false
# 绑定地址,默认值:":4242"
# bind-address = ":4242"
# 默认数据库:"opentsdb"
# database = "opentsdb"
# 存储策略,无默认值
# retention-policy = ""
# 一致性级别,默认值:"one"
# consistency-level = "one"
# 是否开启tls,默认值:false
# tls-enabled = false
# 证书路径,默认值:"/etc/ssl/influxdb.pem"
# certificate= "/etc/ssl/influxdb.pem" # Log an error for every malformed point.
# 出错时是否记录日志,默认值:true
# log-point-errors = true # These next lines control how batching works. You should have this enabled
# otherwise you could get dropped metrics or poor performance. Only points
# metrics received over the telnet protocol undergo batching. # Flush if this many points get buffered
# batch-size = 1000 # Number of batches that may be pending in memory
# batch-pending = 5 # Flush at least this often even if we haven't hit buffer limit
# batch-timeout = "1s" ###
###
### [[udp]]
###
### Controls the listeners for InfluxDB line protocol data via UDP.
###

[[udp]]
  # Determines whether the UDP listener is enabled; default: false
  # enabled = false
  # Bind address; default: ":8089"
  # bind-address = ":8089"
  # Target database; default: "udp"
  # database = "udp"
  # Retention policy to write into; defaults to the database's default policy
  # retention-policy = ""

  # InfluxDB precision for timestamps on received points ("" or "n", "u", "ms", "s", "m", "h")
  # precision = ""

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # Number of batches that may be pending in memory
  # batch-pending = 10

  # Will flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # read-buffer = 0
###
### [continuous_queries]
###
### Controls how continuous queries are run within InfluxDB.
###

[continuous_queries]
  # Determines whether the continuous query service is enabled.
  # enabled = true

  # Controls whether queries are logged when executed by the CQ service.
  # log-enabled = true

  # Controls whether queries are logged to the self-monitoring data store.
  # query-stats-enabled = false

  # interval for how often continuous queries will be checked if they need to run
  # run-interval = "1s"
###
### [tls]
###
### Global configuration settings for TLS in InfluxDB.
###

[tls]
  # Determines the available set of cipher suites. See https://golang.org/pkg/crypto/tls/#pkg-constants
  # for a list of available ciphers, which depends on the version of Go (use the query
  # SHOW DIAGNOSTICS to see the version of Go used to build InfluxDB). If not specified, uses
  # the default settings from Go's crypto/tls package.
  # ciphers = [
  #   "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
  #   "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
  # ]

  # Minimum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # min-version = "tls1.2"

  # Maximum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # max-version = "tls1.2"
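
With the file in place, the server can be run against it explicitly, and influxd config prints a complete configuration for reference. A sketch, assuming the packaged systemd unit:

influxd config                                # print a full configuration for reference
influxd -config /etc/influxdb/influxdb.conf   # run in the foreground with an explicit config path
sudo systemctl restart influxdb               # or restart the packaged service after editing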

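Once [http] is enabled on :8086 (and auth-enabled = false), points can be written and queried over plain HTTP. A minimal sketch; the database testdb and the measurement cpu are illustrative:

curl -XPOST 'http://localhost:8086/query' --data-urlencode 'q=CREATE DATABASE testdb'
curl -XPOST 'http://localhost:8086/write?db=testdb' --data-binary 'cpu,host=server01 usage=0.64'
curl -G 'http://localhost:8086/query?db=testdb' --data-urlencode 'q=SELECT * FROM cpu'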

3. Using the influx client

Use the influx command:

 Usage of influx:
  -version
        Display the version and exit.
  -host 'host name'
        Host to connect to.
  -port 'port #'
        Port to connect to.
  -socket 'unix domain socket'
        Unix socket to connect to.
  -database 'database name'
        Database to connect to the server.
  -password 'password'
        Password to connect to the server. Leaving blank will prompt for password (--password '').
  -username 'username'
        Username to connect to the server.
  -ssl
        Use https for requests.
  -unsafeSsl
        Set this when connecting to the cluster using https and not use SSL verification.
  -execute 'command'
        Execute command and quit.
  -type 'influxql|flux'
        Type specifies the query language for executing commands or when invoking the REPL.
  -format 'json|csv|column'
        Format specifies the format of the server responses: json, csv, or column.
  -precision 'rfc3339|h|m|s|ms|u|ns'
        Precision specifies the format of the timestamp: rfc3339, h, m, s, ms, u or ns.
  -consistency 'any|one|quorum|all'
        Set write consistency level: any, one, quorum, or all.
  -pretty
        Turns on pretty print for the json format.
  -import
        Import a previous database export from file.
  -pps
        How many points per second the import will allow. By default it is zero and will not throttle importing.
  -path
        Path to file to import.
  -compressed
        Set to true if the import file is compressed.

influx help output

Examples:

# Connect to InfluxDB on the local machine
influx

# Connect to InfluxDB on a specific host
influx -host localhost

# Connect to a specific host and port
influx -host localhost -port 8086

# Connect to a specific database on a specific host and port
influx -host localhost -port 8086 -database testdb

# Connect to that database as a specific user
influx -host localhost -port 8086 -database testdb -username root

# Connect to that database as a specific user with a password
influx -host localhost -port 8086 -database testdb -username root -password root

# Execute a command remotely and exit
influx -execute 'show databases;'

# Run a query against a database and return the result as JSON
influx -host localhost -port 8086 -database testdb -username root -password root -execute 'select * from win' -format 'json'

# The same, with pretty-printed JSON
influx -host localhost -port 8086 -database testdb -username root -password root -execute 'select * from win' -format 'json' -pretty
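
Once connected, the client accepts InfluxQL directly. A minimal end-to-end session might look like this (testdb and the cpu measurement are illustrative):

$ influx -host localhost -port 8086
> CREATE DATABASE testdb
> USE testdb
> INSERT cpu,host=server01 usage=0.64
> SELECT * FROM cpu
> SHOW RETENTION POLICIES ON testdb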

  
